A Quantum-Inspired Sensor Consolidation Measurement Approach for Cyber-Physical Systems

Cyber-Physical System (CPS) devices interconnect to collect data from industrial applications over a common platform. Maintaining immense data volumes and making instant decisions by selecting a feasible node to meet latency constraints is challenging. To address this issue, we design a quantum-inspired online node consolidation (QONC) algorithm based on a time-sensitive measurement reinforcement system that evaluates a feasible node, ensures reliable service, and deploys the node at the appropriate position for accurate data computation and communication. We design an angular-based node position analysis method to localize the node through rotation and T-gate usage to mitigate latency and enhance system performance. We formalize the estimation and selection of the feasible node based on quantum-formalized node parameters (node contiguity, node optimal knack rate, node heterogeneity, and probability of fusion variance error ratio). We design a fitness function to assess the probability of node fitness before selection. The simulation results show that our approach achieves an effective performance index by reducing the average error ratio from 0.22 to 0.17, increasing the average coverage ratio from 29% to 42%, and improving the qualitative execution frequency of services. Moreover, the proposed model achieves 74.3% offloading reduction accuracy and a 70.2% service reliability rate compared to state-of-the-art approaches. Our system is scalable and efficient under numerous simulation frameworks.


I. INTRODUCTION
CYBER-physical systems (CPS), which combine sensor frameworks with computational and physical components, are employed to control automated environments and fulfill the demands of the Industry 4.0 paradigm [1]. This paradigm entails coordinating computation, communication, and control mechanisms through conditional probabilistic networks or graph theory. However, the lack of integration among decision models within the environment introduces varying levels of complexity. This may consume abnormal amounts of energy to maintain network services and causes an unusual learning ratio in classical computing-based machine learning techniques. Moreover, data generation is increasing as CPS application deployments grow. The lack of adequate edge computing environments hinders achieving optimal quality-of-service (QoS) for CPS applications. A detailed investigation of the QoS optimization perspective is presented in [2]; in this survey, the authors provide a concise overview of the research gaps in areas such as latency, energy usage, reliability, and privacy concerning CPS applications. Next-generation network demands (enormous resources and security standards) may be resolved using learning models with a backbone of quantum computing [3]. Theoretical quantum computing is an optimal solution for meeting the needs of 5G communication networks by cloning the quantum mechanism [4] on network nodes to optimize the communication and computation overhead. As a result, designing a quantum decision-making method is challenging, but it may help choose the right node to execute offloading services at the network edge.
Sensor localization and its operational decisions are prominent factors in enhancing the lifetime of CPS frameworks. Let us assume an s × a network area is considered to formulate the listed issues, where s refers to a set of sensors and a refers to a set of anchors, respectively. The main challenge is sensor classification based on their activities through a quantum mechanism by considering each cycle's data. The selection of activities such as inactive (switched-off) sensors and the classification of low-area and high-coverage sensors impact network service reliability, network lifetime, and network energy usage. Node localization technologies are primarily categorized into two types: first, range-based approaches [5], which work based on Angle of Arrival (AoA) [6], Time of Arrival (ToA) [7], and Time Difference of Arrival (TDoA); second, range-free approaches [8], [9], which work based on connectivity information among neighbouring devices. Existing methods execute services through classical computing, which is time-consuming. Recently, automated vehicle network technology has received attention globally; therefore, given the vast usage of smart devices, we focus on designing a range-free approach based on quantum machine learning to optimize the overhead of communication and computation services.
There is a need to design and develop algorithms for continuous updates of the residual-energy level, sensor contiguity rate, optimal knack rate, and novel distance measurement. The algorithm should, in theory, iteratively collect node-centric data after each cycle to help make an accurate decision, but this is an NP-hard problem. A few metaheuristic algorithms have been developed to resolve these issues [10]. For instance, Genetic Algorithms (GA), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and the Gravitational Search Algorithm (GSA) are well-known metaheuristics widely used in CPS applications [11]; however, they require substantial time to compute and process. Another drawback is that enlarging the problem's search space slows convergence to optimal solutions. In this regard, we design a quantum-inspired multi-objective data fusion measurement approach to enhance the sensor state's quantization ratio by leveraging sensor coverage range, efficiency, and specific node-centric metrics.
Motivation: Maintaining and analyzing large amounts of data to make instant decisions at an edge server or vehicle node is an essential task in vehicular networks. We consider an abnormal scenario in which computation-intensive tasks are offloaded to a suitable server or device to meet the application deadline. In this process, a crucial phase is selecting a potential device and locating it in the right place. In line with this issue, two algorithms are developed based on quantum theory rather than classical machine learning. The selection and localization mechanism is divided into three tasks (measuring node-centric parameters, quantum state estimation, and data fusion optimization). In our constructions, Algorithm 1 assesses the rotation angle magnitude to determine the rotation direction of the vehicle and the distance based on node fitness, node degree, normalization value, and qubit register probability for effective location analysis subject to data fusion (Theorem 2). Consequently, Algorithm 2 assesses node selection based on quantum state estimation (Theorem 1) subject to the node contiguity, node optimal knack rate, and node heterogeneity parameters for the initial selection of feasible nodes. Moreover, we design a probability-based fitness function to finalize node selection based on quantum state estimation, as can be observed in Fig. 1. Simultaneously, RSU/server localization is formulated by extending the above measurements along with directionality. In this regard, we design an angular-based node position analysis method to localize the node through rotation and T-gate usage to mitigate latency and enhance system performance. MATLAB and Qiskit are used to estimate the probability of each vehicle state. The node consolidation issue is formulated by dividing it into two sub-tasks (node selection and deployment) subject to the probabilities of qubits, quantum state, node contiguity, node optimal knack rate, node heterogeneity, and probability of node fitness. Our contributions are listed as follows: 1) Develop a quantum-inspired online node consolidation (QONC) algorithm based on a time-sensitive measurement reinforcement system, using a quantum-inspired node-centric measurement method and fitness function, to evaluate the feasible node and ensure reliable service by deploying the node at the appropriate position for accurate data computation and communication. 2) Develop an angular-based node position analysis method to localize the node through rotation and the T-gate to mitigate latency and enhance system performance. 3) Conduct simulations based on MATLAB with QUBIT4MATLAB V5.8 and Qiskit to assess the proposed system's performance. Table I provides a brief examination and comparison of the functionalities of the tools and technologies, offering valuable assistance to research scholars. The rest of this article is organized as follows. Section II briefly describes the research gaps and issues of extant approaches. Section III presents the preliminaries. Section IV describes the proposed system and its mathematical model with a novel algorithm in detail. Section V evaluates the outcomes, and Section VI concludes the manuscript.

II. RELATED WORK
Recent research works are examined in line with the article's objectives and classified into two subsections.

A. Accurate Data Fusion Importance Based on Quantum Theory
Usually, effective data fusion impacts system performance by optimizing the energy consumption, data compression, and transmission overhead of CPS systems, since all sensors report to the sink node. We believe that making sensors inactive when it is not feasible to fulfill the system requirements impacts the communication and computation overhead of the system. Therefore, selecting the abnormal sensor, device, or node based on node-centric measurements is challenging. In line with this motivation, we investigated recently published articles that resolve this issue based on quantum theory by estimating each node's probability state.

TABLE I TOOLS AND TECHNOLOGIES COMPARATIVE STUDY
In [12], a butterfly quantum network is developed to estimate the state of each qubit using communication-channel auxiliary data in the process of target node analysis. A quantum cooperative multicast (QCM) model is developed to optimize network transmission losses [13]. However, these models are not optimal for lightweight communication networks.
In [14], efficient resource allocation is designed based on quantum states using a quantum-based particle swarm optimization method. The authors concentrate on diminishing the number of network iterations to achieve high performance. In [8], a node density and node coverage rate-based swarm optimization algorithm is designed to resolve node deployment using quantum theory (NDQ); it is treated as A3 in our comparative analysis. In continuation, a quantum-inspired green communication framework for energy balancing in sensor-enabled CPS systems (Q-EBIT) is developed using a single receiving angle to localize the node in the network [9]; it is treated as A2 in our comparative analysis. In [15], node energy consumption and hop count attributes are considered, with heuristic and deterministic methods designed based on quantum theory to resolve CPS framework node-searching issues. Subsequently, a quantum-inspired genetic algorithm called the quantum approximate optimization algorithm (QAOA) is proposed for effective service execution based on a policy-gradient learning strategy; it resolves device-centric issues based on hypothesis data collected and processed asynchronously to improve system accuracy [16] and is treated as A1 in our comparative analysis. In [17], the researchers design a machine learning architecture-based quantum service allocation model to address next-generation network gaps. This algorithm, called the qubit-preserved shortest path algorithm (DQRA), assesses the reward function to increase the service execution rate in each routing window.

B. Sensor Selection Approaches
In [18], the authors measure an expected hop progress (EP)-based distance between two nodes. However, the system is unsuitable for complex systems because of the longer-range coverage rate. In [19], researchers develop a novel scheduler with window constraints for routers to address communication issues related to router allocation and emergency data allocation. The scheduler shows promising results, achieving an average running time of 0.209 seconds; however, it is important to consider the complexity aspect, as the model includes four designed algorithms. In [20], researchers develop a role-aware model based on hypergraphs and an attention mechanism to consolidate node roles, achieving a 12.1% accuracy improvement in node classification. In [21], the researchers design an anchor pairs condition (APC) analysis method to assess the distance relation among anchor nodes for effective localization; nodes are classified as optimal/suboptimal based on different distance measurements. In [22], the researchers design a novel method for node localization by measuring the horizontal distance based on the angle of an arriving signal. In [23], the authors develop a power-efficient node localization (PENL) approach based on a neural network (NN) for effective distance measurement among devices, which does not depend on hop counts; it measures the distance based on an anisotropic signal. However, the performance of these algorithms degrades due to many node-centric factors, especially in sparse networks, as summarized in Table II. Therefore, a robust range-free localization algorithm based on quantum theory is essential for effective autonomous vehicle orchestration.

III. PRELIMINARIES
The research community has shown increasing interest in quantum computing in recent years, as it offers potential improvements in performance metrics such as latency and reliability while maintaining low complexity similar to classical models. However, selecting the appropriate implementation platform remains a challenging task. One notable option is the D-Wave 2000Q, which boasts 2000 qubits. Additionally, IBM has announced plans to release a machine based on 50 qubits by 2020, although it is still in the development phase. Moreover, each qubit occupies a superposition state, and after measurement the final state collapses to 0 or 1, like a classical binary bit. A single-qubit state over the two-dimensional basis |0⟩, |1⟩ is written |ψ⟩ = a|0⟩ + b|1⟩, where the amplitudes (a, b) satisfy |a|² + |b|² = 1, so the probabilities of measuring |0⟩ and |1⟩ are |a|² and |b|², respectively. A two-bit (n = 2) quantum register is the tensor product of two such states and spans the four basis states |00⟩, |01⟩, |10⟩, |11⟩, with one amplitude per basis state. The normalized sum of the measurement probabilities equals 1, and the measurement of a 3-qubit register proceeds analogously: its 2³ = 8 basis states |000⟩, ..., |111⟩ carry probabilities that likewise sum to 1.
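The register arithmetic above can be sketched numerically. The following minimal NumPy illustration (not the authors' code; the amplitudes are arbitrary example values) shows how basis-state probabilities are obtained and normalized for 2- and 3-qubit registers:

```python
import numpy as np

# Amplitudes of a 2-qubit register over |00>, |01>, |10>, |11>;
# each measurement probability is |amplitude|^2.
amps = np.array([0.5, 0.5, 0.5, 0.5], dtype=complex)
probs = np.abs(amps) ** 2           # [0.25, 0.25, 0.25, 0.25]

# A 3-qubit register spans 2**3 = 8 basis states |000> ... |111>.
amps3 = np.ones(8, dtype=complex) / np.sqrt(8)
probs3 = np.abs(amps3) ** 2         # uniform superposition, each 1/8
```

In both cases the probabilities sum to 1, matching the normalization condition stated above.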

A. Communication Model
In this article, we examine an experimental environment similar to that given in [26], [27]. To evaluate the signal-to-noise ratio, we employ the default method, where tp_i^com represents the transmission power, g_i^com represents the channel gain, σ² denotes the channel noise power, and ω_i^com represents the bandwidth. The transmission rate between the node and the RSU is defined using the Shannon-Hartley formula as r_i = ω_i^com log₂(1 + tp_i^com g_i^com / σ²). The probability of the communication channel is updated based on the probabilities of transmission rate, node/vehicle direction, coverage, and angle, as expressed by ψ_{l,l+1|k} ← ρ[transmission, direction, coverage, angle]. The objective is to ensure that this probability remains below a threshold value to achieve the desired goals.
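Under the symbol definitions above, the Shannon-Hartley rate can be sketched as follows; the function and parameter names are illustrative assumptions, not taken from the paper's code:

```python
import math

def transmission_rate(bandwidth_hz, tx_power_w, channel_gain, noise_power_w):
    """Shannon-Hartley rate r = w * log2(1 + tp * g / sigma^2)."""
    snr = tx_power_w * channel_gain / noise_power_w
    return bandwidth_hz * math.log2(1.0 + snr)

# Example: a 1 MHz channel with an SNR of 1000 (30 dB).
rate = transmission_rate(1e6, 0.1, 1e-6, 1e-10)
```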

B. Computation Model
The execution of services typically occurs locally on the On-Board Unit (OBU). However, when dealing with computation-intensive tasks, the services are offloaded to a server or the cloud, which is a fundamental strategy. As a result, the latency of local computation is defined as the required resources (CPU cycles) for service execution divided by φ_l, the resources of the lth OBU or node. The server-side computation latency is defined as the size of the service (bits) divided by the transmission rate, plus the execution time on the server, and it must not exceed the threshold value, i.e., τ_{l,l+1|k} ≤ τ^thr. Node consolidation involves the validation of a node for inclusion within a framework that facilitates data fusion. This process is crucial for ensuring the precise design of a decision-making system in autonomous vehicle systems. As next-generation automated vehicle frameworks become prevalent, they accommodate a vast number of active edge devices, actuators, and Internet of Things (IoT) devices. These multi-dimensional physical spaces present resource capacity challenges. Additionally, traditional machine learning techniques fall short of meeting the learning requirements of industrial applications due to limited data analytics, exploration, and exploitation capabilities. To fulfill these requirements, the service optimization problem can be formulated subject to the following constraints:
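The local-versus-offload latency comparison can be sketched as below; all numeric values and names are illustrative assumptions, not the paper's parameters:

```python
def local_latency(cpu_cycles, phi_local):
    """tau_local = required cycles / phi_l (resources of the l-th OBU)."""
    return cpu_cycles / phi_local

def offload_latency(size_bits, rate_bps, cpu_cycles, phi_server):
    """Transmission time plus server-side execution time."""
    return size_bits / rate_bps + cpu_cycles / phi_server

# Offload only when it beats local execution and meets the threshold.
tau_thr = 0.5                                   # illustrative threshold (s)
tau_loc = local_latency(8e8, 1e9)               # 0.8 s on the OBU
tau_off = offload_latency(2e6, 2e7, 8e8, 1e10)  # 0.1 s uplink + 0.08 s compute
should_offload = tau_off < tau_loc and tau_off <= tau_thr
```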
C5: Fusion variance χ_s^pe ≤ χ_{s,pe}^thr. The primary objective of this article is to address node selection challenges in service offloading, with a specific focus on achieving low latency and efficient resource utilization, where ρ ensures the probability of each variable. Several recent publications have made notable efforts to tackle this problem. Our approach involves formulating the optimization problem by evaluating the qubit state probability of each dimension, as well as angle measurement, and considering the impact of vehicle mobility on network performance. The node selection and deployment problem is formulated using theoretical and mathematical principles. A set of constraints is defined, and it is crucial to ensure that all these variables remain below their respective threshold values. However, due to the inclusion of binary decision bits, the optimization problem becomes non-convex, and the function complexity increases exponentially (on the order of 2^N) with the size of the space represented by C1, C2, and C5. These factors make it evident that this issue is similar to an NP-hard problem. Moreover, the article highlights the quantum computing mechanism in line with these objectives.

IV. QUANTUM-INSPIRED SENSOR CONSOLIDATION MECHANISM
Let us assume that a CPS framework comprises a set of randomly deployed sensors S in an a × a network area with density ρ = S/(a × a) and communication radius R. Let s_l^e, where l = 1, 2, ..., L, denote a set of GPS-enabled sensors called active nodes, and s_k^c, where k = 1, 2, ..., K, denote a set of hyper nodes s^c = S − s^e that may perform both relay services and computation services on demand, where the service arrival rate is λ_l for the lth vehicle. The sensor cluster is formulated based on distance, where the reliability of paired sensors is essential to execute services with low latency by following a novel offloading strategy with a measurement index. For more information, please refer to the research conducted in [24], [28]. This section is further subdivided by context: quantum angular theory for node coordinate analysis, fusion variance measurement, the node selection strategy, the impact of node mobility on node selection, and the complexity of the proposed approach.

A. Quantum Angular-Based Node Coordinates Analysis
Reliability plays an important role in establishing the communication and computation path by selecting the sensor pair with the highest reliability value, including relay sensors. The service reliability is measured with (3).
Deriving the possible location of the relay node is essential, since its location coordinates and location angle are unknown. Let us assume that two active sensors (S_l^e, S_{l+1}^e) are deployed with maximum communication ranges r_{l,k}^i ≤ R and r_{l+1,k}^i ≤ R, respectively. The distance between the relay node and an active node is calculated using (4).
Here, ϑ (ϑ ∈ (−π, π]) denotes the phase of each qubit; the ith qubit phase is ϑ_i = arctan(x_i/y_i). Each qubit phase may occupy any one of the four quadrants, as illustrated in Fig. 2. The quantum-based sensor angle position (QSAP) measurement index helps to place the regular node at the right place by estimating the feasibility of each qubit with respect to the sensor position. The quantum rotation gate plays an important role in qubit calculation and is derived as follows: a qubit rotated by θ_l^k degrees maps the original vector (x_i, y_i)^T to (x_i', y_i')^T.
Here, θ_l^k is the rotation angle, calculated with (6), where τ denotes the iteration count, Δ = 4 × exp(−τ/τ_max) represents the rotation step that controls the rotation speed, and τ_max denotes the maximum number of iterations. The calculation of the node direction is highly essential and is defined by the function p(x_i, y_i), where d_{l,k}^be, d_{l,k}^pre, ϑ_{l,k}^be, and ϑ_{l,k}^pre are the best and predicted resolved distances and phases of the ith qubit, respectively. Finally, if p(x_i, y_i) < 0, then θ_l^k, (k = 1, 2, ..., K), rotates clockwise; otherwise, it rotates anticlockwise to place the sensor node at the reliable position.
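A minimal sketch of the rotation-gate update, assuming the extracted step formula Δ = 4·exp(−τ/τ_max) and the stated sign convention (p < 0 means clockwise, i.e., a negative angle); the helper names are hypothetical:

```python
import math

def rotation_step(tau, tau_max, scale=4.0):
    """Step magnitude that shrinks with iterations (reconstructed formula)."""
    return scale * math.exp(-tau / tau_max)

def rotate(x, y, theta):
    """Apply the 2-D rotation matrix to the amplitude pair (x, y)."""
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def rotation_angle(p_value, magnitude):
    """p < 0 -> clockwise (negative angle); otherwise anticlockwise."""
    return -magnitude if p_value < 0 else magnitude

x, y = 1.0, 0.0
x2, y2 = rotate(x, y, rotation_angle(-0.3, rotation_step(1, 100)))
```

Note that the rotation preserves the amplitude norm, so a normalized qubit stays normalized after each update.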
Let us define the qubits of every sensor in S and assume every qubit is an attribute used to assess the state of every sensor, that is, Q = (q_1, q_2, ..., q_s, ..., q_n), where s_l^e ∪ s_k^c ⊆ S.
Here, the supervision state is normalized based on the derivation described in Section III, and n is the number of qubits. The quantum rotation gate helps to localize the ith bit of the sth individual sensor q_s, as described with (9).
Each state i from 1 to n updates the bit location using the coordinates (x_{l,k}^i, y_{l,k}^i)^T. The rotation angle plays a crucial role in the sensor selection process and is determined as follows.
Here, p(x_{l,k}^i, y_{l,k}^i) represents a control function that assesses the direction of rotation, and the rotation angle magnitude is denoted by Δθ_{l,k}^i. The conditions corresponding to the rotation angle can be found in Table III, which lists all feasible solutions. Fig. 3 illustrates the derived position g_s^i and the predicted position ĝ_s^i of the relay sensor. Subsequently, a binary solution is essential to cross-verify the system's performance. Let us assume that the probability of the current-position qubit x_i is compared with a random number ran, where 0 ≤ ran ≤ 1. The binary solution of the sensor state is defined with (10).
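The binary-solution step in (10) can be sketched as a probabilistic collapse: the qubit's probability is compared against a uniform random number. The sampling loop below is illustrative, not the authors' implementation:

```python
import random

def binary_state(prob_one, rng):
    """Compare |x_i|^2 with a random number ran in [0, 1] to obtain
    the binary solution (a reconstruction of the rule in (10))."""
    ran = rng()
    return 1 if ran < prob_one else 0

rng = random.Random(7).random          # seeded for reproducibility
bits = [binary_state(0.8, rng) for _ in range(1000)]
fraction_ones = sum(bits) / len(bits)  # empirically near 0.8
```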
The binary solution set is derived based on the above equation. Algorithm 1 measures the status of hybrid sensors for effective device location through quantum theory. Deploying the device at the proper position based on these novel measurements affects the system's latency and performance. Unlike the classical method, we design a quantum computing-based node-centric measurement system based on distance, hop count, and coordinates for direction analysis to select the feasible sensor set. In this process, line 1 initializes the demanded attributes (the selected node should not be an active node), and lines 5-8 assess the expected position subject to angle, distance, and data variance. The hyper sensor set is updated in ascending order based on the distance and degree of a node/vehicle. Subsequently, the second objective is resolved by Algorithm 2. If the hyper node is the same as the active node, then the location coordinates are updated based on the Monte Carlo method through the cognition coefficient weight; if this weight is at least 0.5, the position is assessed as follows; otherwise, line 16 is processed.
Once the device position is assessed, the second target, which is based on quantum state estimation, is resolved as follows.

B. Fusion Variance Measurement for Quantum State Estimation
Quantum state estimation is an essential part of quantum theory and is carried out in three process steps (measurement, analysis, and reconstruction), as can be observed in Fig. 4. Our objective is to strengthen the measurement process, because the reconstruction depends on the outcome of the measurement process. The summation of qubit probabilities plays an essential role in finalizing the state of the sensor to accomplish sensor consolidation. Note that enhancing the tomography accuracy means achieving a minimum error rate between the expected and current true states.
Denoting the quantum state reconstruction as λ, based on n attribute measurements (specifically device contiguity, device optimal knack ratio, and device heterogeneity, treated as three qubits), we can define each device as having n channels to collect data for each state. In this context, the operator set {π_i}_{i=1}^n optimizes the measurements of each sensor device, and the current state probability of the ith qubit can be represented as o_i = tr(λπ_i). To mitigate redundancy and iteratively occurring data, we construct a frequency matrix using a measurement vector V_s. The estimation error probability state, denoted O_s^pee, is evaluated from it, and the covariance matrix between V_s and V_{s+1} is given by C_{s,s+1} = E[O_s^pee (O_{s+1}^pee)^T]. In the case where devices s and s+1 are independent, C_{s,s+1} is zero. The sensor weight matrix W_s is defined with b = [I_m, ..., I_m]^T, where I_m is an identity matrix of order m and C is the error covariance matrix.
Theorem 1: The optimal weighting matrix W = (b^T C^{-1} b)^{-1} b^T C^{-1} is estimated through the linear least mean square error criterion.
Proof: Let us consider a method for precisely determining the probability state of a node by integrating unbiased data from the sensor nodes, with the fused estimate O^ouf = W O^pee. According to the single-attribute and multiple-attribute fusions, the probabilities are combined through the true weighting matrices {W_s}_{s=1}^S. The data fusion measurement error is therefore redefined in terms of O^pee, and the data fusion variance follows χ = E[(O^pee)^T (O^pee)]. Based on (17), the true variance follows; and since unbiasedness requires b^T W^T = I_m, the weighting matrix can be expressed accordingly. According to (19), the matrix is rewritten with W = (b^T C^{-1} b)^{-1} b^T C^{-1} as the weighting matrix. Combining (15) and (13), the legitimate true data fusion probability is O^ouf = W O^pee.
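As a numerical sketch of the matrix-weighted fusion W = (b^T C^{-1} b)^{-1} b^T C^{-1} (a standard linear least-mean-square construction; the example measurements and covariances below are assumptions, not the paper's data):

```python
import numpy as np

def matrix_weighted_fusion(measurements, C):
    """Fuse S m-dimensional sensor estimates with joint error covariance C
    using W = (b^T C^-1 b)^-1 b^T C^-1, where b stacks S identity matrices."""
    S, m = measurements.shape
    b = np.vstack([np.eye(m)] * S)                  # (S*m, m) stacked identities
    Cinv = np.linalg.inv(C)
    W = np.linalg.inv(b.T @ Cinv @ b) @ b.T @ Cinv  # (m, S*m) weighting matrix
    fused = W @ measurements.reshape(-1)
    fused_cov = np.linalg.inv(b.T @ Cinv @ b)       # fused error covariance
    return fused, fused_cov

# Two independent scalar sensors with error variances 1 and 4:
meas = np.array([[1.0], [2.0]])
C = np.diag([1.0, 4.0])
fused, cov = matrix_weighted_fusion(meas, C)
```

With these values the more precise sensor dominates (weights 0.8 and 0.2), and the fused variance (0.8) is smaller than either individual variance.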
Corollary 1: Assume the data fusion covariance matrix C_{s,s+1} = 1 when the n attribute and s sensor measurements are identical; then O^ouf follows, with the weighting matrix defined accordingly. When the measurements are different, the variance matrix follows; in case C_s > 0, (23) applies. Thus, (15) is a conditionally true optimal probability, but it remains quite complex when the dimensionality of the sensor measurement attributes is high. However, the following Theorem 2 resolves the data loss and true data fusion issues.

Theorem 2: According to the variance matrix and the covariance matrix between V_s and V_{s+1}, which is estimated as per the sensor performance index, the sub-optimal true probability is defined with weighting coefficients [w_1, w_2, ..., w_S], and the variance matrix is written accordingly.

Proof: A single-attribute device is called a single-measurement device, and vice versa. The expected data fusion measurements are considered; the data fusion measurement error is derived from them, and the subsequent fusion variance matrix follows.
With C_{s,s+1} = 0 for s ≠ s+1, the corresponding variance follows. According to (20), an increase in the number of qubits results in a corresponding increase in computational complexity. Consequently, the measurement of each sensor state's probability becomes more complex when the number of attributes or qubits increases during a state assessment. However, our gradient model helps improve computational efficiency despite these complexities.
Corollary 2: If the values of the n attribute and s sensor measurements do not match, the covariance matrix C_{s,s+1} for data fusion is assumed to be zero. In this case, the value of O^ouf can be determined as follows, together with the corresponding weight matrix and variance matrix. Proof: For C_{s,s+1} = 0 (s ≠ s+1), substituting into the aforementioned equation yields the resulting weight and variance matrices.
Consequently, the estimated variance of data fusion for each measurement is calculated as follows.
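In the independent-sensor case of Corollary 2, the optimal scalar weights reduce to inverse-variance weighting. A hedged sketch with assumed example values (names and numbers are illustrative):

```python
import numpy as np

def inverse_variance_fusion(estimates, variances):
    """With C_{s,s+1} = 0 (independent sensors), the least-mean-square
    weights are proportional to the inverse of each sensor's variance."""
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / (1.0 / v).sum()            # normalized weights
    fused = float(w @ np.asarray(estimates, dtype=float))
    fused_var = 1.0 / (1.0 / v).sum()          # always <= min(variances)
    return fused, fused_var, w

fused, fused_var, weights = inverse_variance_fusion(
    [0.6, 0.9, 0.7], [0.01, 0.04, 0.02])
```

The fused variance is smaller than the best individual sensor's variance, which is the point of the fusion step.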
The developed fusion mechanism, utilizing scalar and matrix weighting coefficients, is based on the linear least mean square error. This mechanism aims to differentiate the probability of each qubit and estimate the potential of the sensor to address the computation and communication overhead challenges associated with sensor states. The diagonal elements of C_s indicate the measurement accuracy of device s. Algorithm 2 assesses the quantum state to select the RSU or server by measuring the node contiguity, node optimal knack rate, and node heterogeneity parameters. In this process, line 1 initializes three quantum bits (contiguity ratio, knack ratio, and heterogeneity weight). Moreover, each node's fitness weight is estimated before its selection. Line 4 assesses the contiguity ratio, line 6 the knack ratio, and line 7 the heterogeneity weight. The detailed quantum state estimation is derived in Subsection IV-B1 below, which helps to conclude the quantum register state with a probability ratio. A vehicle is selected as per the demand of the qubits. The fitness function is applied to select the sensor if η_s ≥ 0.5.

1) Iterative Data Fusion Analysis Using Quantum Gates:
To assess the quantum register state of each sensor, we use a series of gates, including the T-gate and H-gate, to manipulate the qubits in the register and then measure the resulting state. The H-gate is applied to individual qubits within the register to construct a balanced mixture of computational basis states, making it easier to probe and analyze the quantum state. The resulting superposition states can be measured to obtain statistical information about the quantum register's state. The T-gate is a fundamental gate used for phase estimation. It is employed on selected qubits to adjust the relative phases between the qubits' superposition states. This phase control is vital for performing accurate quantum calculations and extracting meaningful results.
In our simulation, three qubits are considered. To create a superposition in a single qubit, we apply an H-gate to the qubit for a lightweight environment. This puts the qubit in a state where it has a 50% chance of being measured in the |1⟩ state and a 50% chance of being measured in the |0⟩ state. For complex environment calculations, we combine multiple gates, including T-gates and H-gates, in a controlled and coherent manner to map the quantum state to the classical states (0 or 1).
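The H- and T-gate behaviour described above can be checked on a single-qubit statevector using the standard gate matrices (a simulation sketch, not the paper's Qiskit code):

```python
import numpy as np

# Standard single-qubit gate matrices.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]], dtype=complex)

state = np.array([1, 0], dtype=complex)   # |0>
state = H @ state                         # balanced superposition
probs_after_h = np.abs(state) ** 2        # 50% / 50%

state = T @ state                         # phase shift on the |1> component
probs_after_t = np.abs(state) ** 2        # measurement probabilities unchanged
```

This illustrates the division of labour in the text: the H-gate creates the balanced mixture, while the T-gate adjusts relative phase without altering measurement probabilities on its own.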
The probability-based normalization process is employed while assessing the qubit's state. In continuation, let us assume three qubits spanning eight dimensions; the dimension with the best node probability determines which node is chosen.
To complete the node selection, let us assume that the first qubit has an elevated weight, meaning that the first bit equals 1, with a corresponding probability for each matching basis state. However, these probabilities do not sum to 1; therefore, we apply the normalization formula by dividing each amplitude by the normalization factor.
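The normalization step can be sketched as follows; the amplitude values are illustrative assumptions:

```python
import math

def normalize_amplitudes(amplitudes):
    """Divide each amplitude by the normalization factor sqrt(sum |a|^2)
    so the register's measurement probabilities sum to 1."""
    factor = math.sqrt(sum(abs(a) ** 2 for a in amplitudes))
    return [a / factor for a in amplitudes]

# Un-normalized amplitudes of the |1xx> subspace of a 3-qubit register:
amps = normalize_amplitudes([0.6, 0.3, 0.3, 0.2])
total_prob = sum(abs(a) ** 2 for a in amps)   # exactly 1 after normalization
```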
After applying the normalization factor, the 3-qubit quantum register is confirmed to be in a normalized state. Fig. 5 illustrates each sensor's data fusion analysis, which plays an important role in assessing the sensor state. The T-gate and H-gate are used to assess the quantum register state at each iteration based on Algorithms 1 and 2. Fig. 6(a) illustrates the probabilities of each dimension; the sensor is selected as per the demand and weight of the qubits, and detailed simulation outcomes are described in Section V.

C. Node Selection Strategy Based on Conditional Probability
To evaluate the state of a node, a node selection strategy is employed, which relies on the conditional probability of each constraint mentioned earlier. For instance, the fusion variance probability value must fulfill constraint C6 to be considered in the node state estimation process. Likewise, a matrix can be formed in the following manner, with a value of 1 representing node selection and any other value indicating non-selection, in which case the process restarts. For more information, please refer to [24], [28].
Subsequently, the node fitness probability is estimated as follows, where m indicates the number of constraints defined in the problem-formulation section.
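The constraint matrix and fitness estimation described above can be sketched as follows. The probability values, the 0.5 threshold, and the use of a simple mean over the m constraints are illustrative assumptions, since the paper's exact formulas are not reproduced here.

```python
import numpy as np

# Hypothetical per-node constraint satisfaction probabilities
# (rows: candidate nodes, columns: the m constraints C1..C6).
P = np.array([
    [0.9, 0.8, 0.95, 0.7, 0.85, 0.9],
    [0.6, 0.4, 0.50, 0.9, 0.30, 0.8],
])
threshold = 0.5                          # assumed per-constraint cut-off

# Selection vector: 1 when every constraint probability clears the
# threshold; any other outcome is non-selection and restarts the process.
select = (P > threshold).all(axis=1).astype(int)

# Node fitness probability: sketched here as the mean over the m constraints.
m = P.shape[1]
fitness = P.sum(axis=1) / m
```

Node 0 satisfies all six constraints and would be selected; node 1 fails two constraints, so its non-selection restarts the process.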
1) Vehicle Mobility Impact on Node Selection: The primary reason for failing to meet service-offloading objectives is the high mobility of vehicles, which leads to data-transmission failures or disturbances when the server's execution time is unusually long. Therefore, ensuring a short transmission time for the link is significant, and the short transmission-link time is derived as follows.
When the vehicles do not travel in the same direction, the transmission-channel link quality is captured by ϕτ_{j,j+1|j,k}, the transmission link between the jth and (j+1)th vehicles or between the jth vehicle and the kth server over the network. Here, d_j is the speed of the jth vehicle, R indicates the coverage range, and the subscripted distance term gives the separation between the vehicles and the server, respectively. In this scenario, we select the offloading node based on a specific condition: the probability of the service-execution elapsed time must be less than the probability of the transmission-link time.
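A minimal sketch of this offloading condition, assuming straight-line motion at speed d_j within coverage range R; the function names and the linear link-lifetime model are illustrative simplifications, not the paper's derivation.

```python
def link_time(R, distance, v_j):
    """Remaining link lifetime: time until vehicle j leaves the server's
    coverage range R, starting from the current separation `distance`
    (same length units; v_j is the vehicle speed in units per second)."""
    if distance >= R:
        return 0.0                    # already outside coverage
    return (R - distance) / v_j       # seconds until the link breaks

def should_offload(exec_time, R, distance, v_j):
    """Offload only if the service can finish before the link expires."""
    return exec_time < link_time(R, distance, v_j)

# Example: 300 m coverage, vehicle currently 100 m from the server at 20 m/s.
t = link_time(300.0, 100.0, 20.0)     # (300 - 100) / 20 = 10.0 s
ok = should_offload(4.0, 300.0, 100.0, 20.0)   # 4 s < 10 s, so offload
```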

D. Complexity Analysis
In the proposed mechanism, calculating each vehicle/node state, denoted |S⟩, is crucial, since node selection relies on the probability of the node dimension. Therefore, the probability amplitudes of the node state must satisfy Σ_{s=1}^{S} |b_s|² = 1. The evaluation involves assessing the suitable vehicle or node across its N state spaces, performing iterative search actions with l vehicles and q control qubits, as can be observed in Algorithms 1 and 2. As a result, the complexity of the proposed approach can be represented as O(√N · l · I_l · q log q), which encompasses weight summation, searching, and estimation of all parameters.
Space complexity: Typically, the qubit storage required for a node is proportional to the logarithm of the number of conditional qubits, denoted O(log q).
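The amplitude condition and the O(log q) storage bound can be checked with a small sketch; the helper names are illustrative.

```python
import math
import numpy as np

def is_valid_state(amplitudes, tol=1e-9):
    """Check the probability-amplitude condition sum_s |b_s|^2 = 1."""
    return abs(np.sum(np.abs(amplitudes) ** 2) - 1.0) < tol

# A 3-qubit state over S = 8 dimensions with equal amplitudes 1/sqrt(8).
b = np.full(8, 1 / math.sqrt(8), dtype=complex)
valid = is_valid_state(b)                    # the condition holds

# Space complexity: qubit storage grows as O(log q) in the control qubits,
# e.g. q = 8 conditional qubits require log2(8) = 3 storage qubits.
q = 8
storage_qubits = math.ceil(math.log2(q))
```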

V. EXPERIMENTAL RESULTS
In this section, simulation results are plotted to determine the performance of our system. The Rigetti Forest cloud platform and QUBIT4MATLAB V5.8 with MATLAB are used to evaluate every numerical simulation. The PyQuil package is used by Rigetti's Forest cloud platform to simulate our method in a quantum virtual environment. H(): Hadamard gate, R(): rotation gate, T(): T-gate, and I(): identity gate are used in the simulation. Additionally, the Qiskit tool is used for cross-verification on 64-bit Ubuntu 20.1 LTS with an Intel Core i7-10700 CPU running at 3.80 GHz and an NVIDIA GeForce RTX 3090. The state-of-the-art (SOTA) approaches are labelled QAOA (A1), Q-Ebit (A2), and NDQ (A3), where A1 is a novel quantum service-allocation model [16], A2 is a quantum-inspired communication and computation framework [9], and A3 is a quantum-based swarm optimization algorithm [8].
In our simulation, a backhaul link with a 10 Gb capacity connects c = 3, 6, 9 RSUs and 31% of the vehicles for efficient communication and processing. A MATLAB function generates the Rayleigh fading channel values, and the channel bandwidth is fixed at 1000 Hz. There are eight actions and six states for the 3 qubits, respectively. Full-duplex node deployment helps preserve channel-resource usage and communication delay, and the simulation variables are listed in Table IV. The compute service's anticipated storage size is 15 Mb, and the plots are built from 300 iterations of the logs and a CPU cycle per bit.
Fig. 6 illustrates the state-estimation analysis based on the dimensions of each qubit and the mean-distance-error analysis. Quantum-based probability estimation helps finalize the state of each vehicle; vehicle selection then takes place accordingly, which impacts the system's performance in terms of low delay, resource usage, and quality of experience. To choose the RSU server by the application deadline, our two algorithms evaluate the node position using the rotation angle and H-gate. The probability of each dimension of a single sensor's qubit register is shown in Fig. 6(a). Moreover, the first two qubits are assigned a higher priority than the third qubit through weighting, because the first two significantly impact system performance. Consequently, the probability ratio is plotted for all four sensors, as represented in Fig. 6(b). The first aim of this article is to assess the suitable position of the sensor to meet the qualitative coverage ratio and to cover the sensor service demands with respect to distance estimation and angle measurement. Therefore, the mean distance error is assessed and plotted with respect to dimensions and sensor count, which can be observed in Fig. 6(c). The proposed method achieved a lower error rate than the SOTA approaches, even at different dimensional levels.
Fig. 7 illustrates the service-rate analyses of the proposed method. We considered three different scenarios to assess the proposed system's performance. Usually, the sensor density, service arrival rate, and coverage range affect the service reliability rate, and these three variables vary per the service demand to meet the application deadline. We noticed that, as the computation and communication capacity increased, the anchor count and the service-execution ratio increased drastically, as can be observed in Fig. 7(a)-(c).
Fig. 8 illustrates the impact of the qubit dimension on system performance. Fig. 8(a) shows the time demanded to complete each iteration with respect to the dimension count. The proposed system achieved a low consumption time of, on average, 1.1 k seconds, whereas A1, A2, and A3 achieved only an average time-computation rate. Fig. 8(b) shows the node-selection error rate based on the dimension probability. In this case, the proposed model achieved a lower error rate than the other methods, since our state-estimation method adequately measures the states and their probabilities before concluding the device-selection process. In addition, the fitness value is estimated and considered before selecting the suitable sensor. The corresponding fitness values of each sensor, based on dimensions, are plotted in Fig. 8(c).
Fig. 9 illustrates the error ratio with respect to service coverage and resource-usage analysis. The proposed system achieved a lower normalized root-mean-square error, around 0.29%, than the SOTA approaches as the hypernode count increases, which can be observed in Fig. 9(a). Deploying the sensor at the right position impacts the service reliability ratio, which mitigates the system's coverage issues. The proposed method achieved a high coverage ratio by fulfilling resource requirements with less communication and computation overhead, as shown in Fig. 9(b). Continuing, a trade-off must be maintained between resource-usage cost and latency minimization over joint computation and communication; the proposed method achieved low resource usage as the hypernode count increases, which can be observed in Fig. 9(c).
Two scenarios have been considered, and the corresponding results are reported in Table V. The second case specifically shows a deficient infrastructure, making it difficult to combine low error and latency rates with high service reliability. The creation of a novel node-consolidation system that takes into account the qubit-register probability and a quantum machine-learning mechanism led to the proposed method's low latency and error rate. The fitness-weight prediction function is crucial for maintaining a trade-off between resource usage, coverage ratio, and latency rate while choosing the best RSU. First, let us imagine that there are four servers in a CPS framework, each with a separate set of computational resources, and that server |l_b⟩ is overloaded as a result of a high service-arrival rate. In this scenario, offloading takes place to a node chosen based on the node-centric characteristics stated in Table VI.
According to a set of measurements (node contiguity, node heterogeneity, and node optimal knack rate), including quantum-state estimation and data-fusion optimization, the red tuple offloads its services to the appropriate node, shown in blue. In short, while accommodating the offloaded services, the targeted node's service frequency, waiting time relative to the service length, and execution time are also crucial. The blue tuple was chosen because it may be resource-rich for computing. The system performance improves more with this selection strategy than with SOTA methods.

VI. CONCLUSION
This article develops a QONC decision-making system that selects a feasible node to meet application latency constraints and deploys the node at the appropriate position for accurate data computation and communication. The angular-based node-position analysis method effectively localized the node position, which is essential for achieving low latency and enhancing system performance. The simulation results show that our method achieved a low average error ratio of 0.17-0.22 and an average coverage ratio of 29%-42% by estimating the quantum-formalized node parameters (node contiguity, node optimal knack rate, node heterogeneity). The proposed model achieved a 74.3% offloading-reduction accuracy and a 70.2% service-reliability rate. The designed fitness function assesses the node-fitness probability before selecting the potential device to execute the offloaded services. To satisfy the needs of applications with tight deadlines, we will expand this work in the future by developing intelligent decision-making methods using linear programming and Grover's algorithm.

VII. FUTURE OBJECTIVES
The advantage of designing a Quantum Reinforcement Learning (QRL) method over traditional RL is realized through three strategies. 1) The equilibrium between exploration and exploitation of actions will be achieved in QRL's exploration policy, which will be based on the collapse postulate rather than the greedy or Boltzmann policies. 2) By designing and utilizing the unitary-transformation method in a coordinated manner, the QRL model will update all of the states concurrently. 3) The robust nature of the QRL algorithm will help it adapt quickly to different learning rates as unknown changes occur in the environment. This robustness will be evaluated through learning-performance and convergence-rate analysis in future simulations of lightweight and complex environments.

Fig. 2. Quadratic place of the sensor. (a) Four phases of the sensor. (b) Rotation-angle-based sensor localization.

Fig. 3. Vehicle location measurement based on angle and reliability factor.

Theorem 1: Let us consider the assessment of the quantum state λ, assuming a total of n attributes. If χ = E[(O − Ō)ᵀ(O − Ō)] represents the data-fusion variance of O, then the true data-fusion probability follows.

Fig. 5. The node/vehicle state consolidation involves analyzing the probability variance through quantum-based data fusion.

Fig. 9. Node density based on error ratio, resource usage, and service coverage analysis.

TABLE II. RELATED WORKS IN BRIEF, IN LINE WITH THE ARTICLE OBJECTIVES.

TABLE IV. THE SIMULATION PARAMETERS INCLUDE COVERAGE RANGE (CR), TRANSMISSION POWER (TP), CHANNEL FREQUENCY (CF), THERMAL NOISE POWER (TNP), AND BANDWIDTH (B).