When atoms are accelerated in the vacuum, entanglement among atoms degrades compared with the initial situation before the acceleration. In this study, we propose a novel view that the lost entanglement can be recovered completely when high-dimensional spacetime is exploited, provided the acceleration is not too large, since the entanglement loss caused by large acceleration outpaces the recovery process. We also calculate the entanglement change caused by the anti-Unruh effect and find that the lost entanglement can only be partly recovered by the anti-Unruh effect, and that the anti-Unruh effect appears only for a finite range of accelerations when the interaction time scale is approximately shorter than the reciprocal of the energy gap in two-dimensional spacetime. The limiting case of zero acceleration is also investigated, which gives an analytical interpretation for the increase or recovery of entanglement.
Owing to their global search capabilities and gradient-free operation, metaheuristic algorithms are applied to a wide range of optimization problems. However, their computational demands become prohibitive when tackling high-dimensional optimization challenges. To address these challenges, this study introduces cooperative metaheuristics integrating dynamic dimension reduction (DR). Building upon particle swarm optimization (PSO) and differential evolution (DE), the proposed cooperative methods C-PSO and C-DE are developed. In the proposed methods, a modified principal component analysis (PCA) is utilized to reduce the dimension of the design variables, thereby decreasing computational costs. The dynamic DR strategy periodically re-executes the modified PCA after a fixed number of iterations, so that the important dimensions are dynamically identified. Compared with the static strategy, the dynamic DR strategy identifies the important dimensions more precisely, thereby accelerating convergence toward optimal solutions. Furthermore, the influence of the cumulative contribution rate threshold on optimization problems of different dimensions is investigated. The metaheuristic algorithms (PSO, DE) and cooperative metaheuristics (C-PSO, C-DE) are examined on 15 benchmark functions and two engineering design problems (speed reducer and composite pressure vessel). Comparative results demonstrate that the cooperative methods significantly outperform the standard methods in both solution accuracy and computational efficiency, reducing computational cost by at least 40%. The cooperative metaheuristics can be effectively used to tackle both high-dimensional unconstrained and constrained optimization problems.
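The PCA step at the heart of the dynamic DR strategy can be sketched as follows. This is a minimal illustration, not the authors' modified PCA: the threshold value, the toy swarm, and the use of raw swarm positions as the PCA sample are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A swarm of candidate solutions, as PSO or DE would maintain.
# Only the first 3 of 20 design variables vary strongly.
swarm = rng.normal(size=(50, 20))
swarm[:, :3] *= 10.0

def important_dimensions(positions, threshold=0.9):
    """Keep the leading principal directions whose cumulative
    contribution rate exceeds the given threshold."""
    centered = positions - positions.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]            # descending variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    ratio = np.cumsum(eigvals) / eigvals.sum()   # cumulative contribution rate
    k = int(np.searchsorted(ratio, threshold)) + 1
    return k, eigvecs[:, :k]

k, basis = important_dimensions(swarm, threshold=0.9)
print(k)  # far fewer than 20 directions carry 90% of the variance
```

In the dynamic strategy this reduction would be recomputed every fixed number of iterations, so the retained subspace tracks where the swarm currently concentrates.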
In a fractal zeta universe of bifurcated, ripped spacetime, the Millikan experiment, the quantum Hall effect, atmospheric clouds and universe clouds are shown to be self-similar with a mass ratio of about 10²⁰. Chaotic one-dimensional period-doublings as iterated hyperelliptic-elliptic curves are used to explain n-dimensional Kepler and Coulomb singularities. The cosmic microwave background and cosmic rays are explained as tensile forces of the bifurcated, ripped spacetime. First-iterated binary-tree cloud cycles are related to emissions from 1 to 1000 GHz. An interaction-independent universal vacuum density allows the prediction of large-area correlated cosmic rays in quantum Hall experiments, which would generate local nuclear-disintegration stars, enhanced damage of layers, and enhanced air ionization. A self-similarity between conductivity plateaus and atmospheric clouds is extended to correlations in atmospheric layers, global temperature, and climate.
In this work we study gravitational lensing by a wormhole in the Eddington-inspired Born–Infeld (EiBI) spacetime that incorporates a cosmic string. It is found that the presence of a cosmic string can enhance the light deflection in the strong-field limit, compared to the Ellis–Bronnikov wormhole. The magnification effects of this composite structure could have substantial impacts on the angular separation between the first image and the rest of the images, and on their relative brightness. Furthermore, based on these observables, we model some observable aspects in the strong- and weak-field limits. The presence of a cosmic string can affect some distinguishable observables compared to the wormhole without a cosmic string. This work could deepen our understanding of the spacetime structure of wormholes in EiBI spacetime with one-dimensional topological defects.
The decoherence of high-dimensional orbital angular momentum (OAM) entanglement in the weak scintillation regime is investigated. In this study, we simulate atmospheric turbulence by utilizing multiple phase screens imprinted with anisotropic non-Kolmogorov turbulence. The entanglement negativity and fidelity are introduced to quantify the entanglement of a high-dimensional OAM state. The numerical results indicate that entanglement negativity and fidelity persist longer for a high-dimensional OAM state when the azimuthal mode has a lower value. Additionally, the evolution of higher-dimensional OAM entanglement is significantly influenced by the OAM beam parameters and the turbulence parameters. Compared to isotropic atmospheric turbulence, anisotropic turbulence has a lesser influence on high-dimensional OAM entanglement.
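The negativity used above to quantify OAM entanglement is computed directly from the partial transpose, N = (||ρ^{T_B}||₁ − 1)/2. A minimal sketch for a maximally entangled two-qutrit state (the d = 3 example state is our own, not taken from the paper):

```python
import numpy as np

d = 3  # three OAM modes, e.g. azimuthal indices l = -1, 0, +1

# Maximally entangled two-qutrit state |psi> = sum_l |l>|l> / sqrt(d)
psi = np.eye(d).reshape(d * d) / np.sqrt(d)
rho = np.outer(psi, psi.conj())

def negativity(rho, d):
    """N = (||rho^{T_B}||_1 - 1) / 2, partial transpose on subsystem B."""
    r = rho.reshape(d, d, d, d)               # indices (a, b, a', b')
    r_pt = r.transpose(0, 3, 2, 1).reshape(d * d, d * d)
    trace_norm = np.abs(np.linalg.eigvalsh(r_pt)).sum()
    return (trace_norm - 1.0) / 2.0

print(round(negativity(rho, d), 6))  # (d - 1)/2 = 1.0 for this state
```

For a turbulence-degraded state one would insert the evolved density matrix in place of `rho`; the same partial-transpose computation applies.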
In this paper, we present the second post-Newtonian solution for the quasi-Keplerian motion of a test particle in the regular Simpson–Visser black-bounce spacetime, which has a bounce parameter a. The obtained solution is formulated in terms of the orbital energy, the angular momentum, and the bounce parameter of the black hole. We explicitly analyze the leading effects of the bounce parameter, which has dimensions of length, on the test particle's orbit, including the periastron advance and the orbital period. We then apply this model to the precessing motion of OJ 287 and determine the upper limit of the dimensionless bounce parameter as a/m = 3.45 ± 0.01, where m is the mass of the regular black hole. Compared with the bound given by the periastron advance of the star S2, our bound on a/m is reduced by one order of magnitude, although our upper limit on a still needs further improvement.
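For a = 0 the Simpson–Visser metric reduces to Schwarzschild, so the bounce-parameter corrections ride on top of the standard general-relativistic periastron advance. A sketch of that baseline at leading (1PN) order, cross-checked against the textbook Mercury figure (the numbers below are standard values, not data from the paper):

```python
import math

def periastron_advance(GM, a_orb, e, c=2.99792458e8):
    """Leading-order periastron advance per orbit in Schwarzschild
    spacetime: delta_phi = 6*pi*G*M / (c^2 * a_orb * (1 - e^2))."""
    return 6.0 * math.pi * GM / (c**2 * a_orb * (1.0 - e**2))

# Textbook cross-check with Mercury:
GM_sun = 1.32712440018e20          # m^3 s^-2
a_merc = 5.7909e10                 # m, semi-major axis
e_merc = 0.2056
per_orbit = periastron_advance(GM_sun, a_merc, e_merc)   # rad per orbit
orbits_per_century = 100 * 365.25 / 87.969
arcsec = per_orbit * orbits_per_century * (180 / math.pi) * 3600
print(round(arcsec, 1))  # ~43.0 arcsec/century
```

A bound like the paper's a/m constraint comes from demanding that the extra, a-dependent piece of this precession stay within the observational uncertainty for OJ 287.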
Based on the Many Worlds Interpretation, I describe reality as a multilayer spacetime, where parallel layers play the role of alternative timelines. I link physics to ethics, arguing that one's moral choices shape one's course in the multiverse. I consider one's ethical decisions as decoherence events, leading to movement between alternative timelines, lighter (higher) or heavier (lower) realities. Sometimes in one's curvilinear path in spacetime, one can even experience falling toward lower layers, slipping through wormholes. This theory supports free will and the simulation hypothesis. With this background, I explore the idea that a new theory of gravity might open new possibilities to shape matter and change our worldview through the invention of new technology, transforming information into waves and then into solid matter, paving the way for a new Multiverse Aeon for humanity.
It is known that monotone recurrence relations can induce a class of twist homeomorphisms on the high-dimensional cylinder, an extension of the class of monotone twist maps on the annulus or two-dimensional cylinder. By constructing a bounded solution of the monotone recurrence relation, the main conclusion of this paper is obtained: the induced homeomorphism has Birkhoff orbits provided there is a compact forward-invariant set. This generalizes Angenent's results in the low-dimensional case.
Objective Humans are exposed to complex mixtures of environmental chemicals and other factors that can affect their health. Analysis of these mixture exposures presents several key challenges for environmental epidemiology and risk assessment, including high dimensionality, correlated exposures, and subtle individual effects. Methods We propose a novel statistical approach, the generalized functional linear model (GFLM), to analyze the health effects of exposure mixtures. GFLM treats the effect of mixture exposures as a smooth function by reordering exposures based on specific mechanisms and capturing internal correlations, providing a meaningful estimation and interpretation. Its robustness and efficiency were evaluated under various scenarios through extensive simulation studies. Results We applied the GFLM to two datasets from the National Health and Nutrition Examination Survey (NHANES). In the first application, we examined the effects of 37 nutrients on BMI (2011–2016 cycles). The GFLM identified a significant mixture effect, with fiber and fat emerging as the nutrients with the greatest negative and positive effects on BMI, respectively. In the second application, we investigated the association between four per- and polyfluoroalkyl substances (PFAS) and gout risk (2007–2018 cycles). Unlike traditional methods, the GFLM indicated no significant association, demonstrating its robustness to multicollinearity. Conclusion The GFLM framework is a powerful tool for mixture exposure analysis, offering improved handling of correlated exposures and interpretable results. It performs robustly across various scenarios and real-world applications, advancing our understanding of complex environmental exposures and their health impacts in environmental epidemiology and toxicology.
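The core GFLM idea, ordering exposures along a mechanism-based index and estimating one smooth coefficient function through a low-dimensional basis, can be sketched as follows. The polynomial basis, the simulated exposures, and the noiseless outcome are our illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# n subjects, p exposures reordered along an index t in [0, 1]
n, p = 400, 37
t = np.linspace(0.0, 1.0, p)
X = rng.normal(size=(n, p))
X = np.cumsum(X, axis=1) / np.sqrt(np.arange(1, p + 1))  # correlated exposures

# GFLM premise: the mixture acts through a smooth coefficient function
# beta(t), so y_i = sum_j X_ij * beta(t_j) * dt (a discretized integral).
beta_true = 1.0 - 4.0 * (t - 0.5) ** 2        # a smooth "mixture effect" curve
dt = t[1] - t[0]
y = X @ beta_true * dt                        # noiseless, for a clean check

# Fit: expand beta(t) in a small polynomial basis, then ordinary least squares.
K = 4
basis = np.vander(t, K, increasing=True)      # columns 1, t, t^2, t^3
Z = X @ basis * dt                            # reduced (n, K) design matrix
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
beta_hat = basis @ coef                       # estimated coefficient function

err = float(np.max(np.abs(beta_hat - beta_true)))
print(err)  # essentially zero: 37 correlated effects summarized by 4 numbers
```

The reduction from 37 correlated regression coefficients to a handful of basis coefficients is what buys the robustness to multicollinearity described above.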
Data collected in fields such as cybersecurity and biomedicine often exhibit high dimensionality and class imbalance. To address the low classification accuracy for minority-class samples arising from the numerous irrelevant and redundant features in high-dimensional imbalanced data, we propose a novel feature selection method named AMF-SGSK, based on an adaptive multi-filter and subspace-based gaining-sharing knowledge. First, a balanced dataset is obtained by random under-sampling. Second, combining the feature importance score with the AUC score of each filter method, we propose a concept called feature hardness to judge the importance of a feature, which can adaptively select the essential features. Finally, the optimal feature subset is obtained by gaining-sharing knowledge in multiple subspaces. This approach effectively achieves dimensionality reduction for high-dimensional imbalanced data. Experimental results on 30 benchmark imbalanced datasets show that AMF-SGSK outperforms eight other commonly used algorithms, including BGWO and IG-SSO, in terms of F1-score, AUC, and G-mean. The mean values of F1-score, AUC, and G-mean for AMF-SGSK are 0.950, 0.967, and 0.965, respectively, the highest among all algorithms, and the mean G-mean exceeds those of IG-PSO, ReliefF-GWO, and BGOA by 3.72%, 11.12%, and 20.06%, respectively. Furthermore, the selected feature ratio is below 0.01 across the ten selected datasets, further demonstrating the proposed method's overall superiority over competing approaches. AMF-SGSK can adaptively remove irrelevant and redundant features and effectively improve the classification accuracy of high-dimensional imbalanced data, providing scientific and technological references for practical applications.
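The first two steps of the pipeline, random under-sampling followed by combining multiple filter scores into one importance per feature, can be sketched as below. The specific scores and their combination are our stand-in for the paper's feature-hardness definition, not its actual formula:

```python
import numpy as np

rng = np.random.default_rng(2)

# Imbalanced data: 200 majority, 20 minority; only features 0-4 are informative.
n_maj, n_min, d = 200, 20, 50
X = rng.normal(size=(n_maj + n_min, d))
y = np.r_[np.zeros(n_maj, int), np.ones(n_min, int)]
X[y == 1, :5] += 2.0                      # informative features shift the minority

# Step 1: random under-sampling to balance the classes.
maj_idx = rng.choice(np.flatnonzero(y == 0), size=n_min, replace=False)
idx = np.r_[maj_idx, np.flatnonzero(y == 1)]
Xb, yb = X[idx], y[idx]

# Step 2: two simple filter scores per feature, combined into one importance.
def auc_score(f, labels):
    """Single-feature ranking AUC via the Mann-Whitney U statistic."""
    ranks = f.argsort().argsort() + 1
    n1, n0 = labels.sum(), (1 - labels).sum()
    return (ranks[labels == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

corr = np.abs([np.corrcoef(Xb[:, j], yb)[0, 1] for j in range(d)])
auc = np.abs(np.array([auc_score(Xb[:, j], yb) for j in range(d)]) - 0.5)
importance = corr + auc                   # combined filter score

selected = np.argsort(importance)[::-1][:5]
print(sorted(selected.tolist()))  # the informative features should dominate
```

The paper's third step would then refine such a filtered candidate set with the gaining-sharing-knowledge search over multiple subspaces.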
The present paper is basically a synthesis resulting from incorporating Kerr spinning black hole geometry into E-infinity topology, then letting the result bear on the vacuum zero-point Casimir effect as well as the cosmic dark energy and dark matter density. In E-infinity theory a quantum particle is represented by a Hausdorff dimension Φ, where Φ = 2/(√5+1). The quantum wave, on the other hand, is represented by Φ². To be wave and particle simultaneously, intersection theory leads us to (Φ)(Φ²) = Φ³, which is shown here to be twice the value of the famous Casimir force of the vacuum for a massless scalar field. Thus in the present work a basically topological interpretation of the Casimir effect is given as a natural intrinsic property of the geometrical-topological structure of the quantum-Cantorian micro spacetime. This new interpretation complements the earlier conventional interpretation as vacuum fluctuation or as a Schwinger source, and links the Casimir energy to the so-called missing dark energy density of the cosmos. From the viewpoint of the present work, the Casimir pressure is a local effect acting on the Casimir plates constituting the local boundary condition, while dark energy is nothing but the global combined effect of infinitely many quantum waves acting on the Möbius-like holographic boundary of the entire universe. Since this higher-dimensional Möbius-like boundary is one-sided, there is no outside to balance the internal collective Casimir pressure, which then manifests itself as the force behind cosmic expansion, that is to say, dark energy. Thus, analogous to the exact irrational value of the ordinary energy density of spacetime E(O) = (Φ⁵/2)mc², we now have P(Casimir) = (Φ³/2)(ch/d²), where c is the speed of light, m is the mass, h is the Planck constant, and d is the plate separation. In addition, the new emerging geometry combined with the topology of E-infinity theory leads directly to identifying dark matter with the quasi-matter of the ergosphere. As a direct consequence of this new insight, E = mc² can be written as E = E(O) + E(D), where the exact rational approximation E(O) = mc²/22 is the ordinary energy density of the cosmos and the exact rational approximation E(D) = mc²(21/22) is the corresponding dark energy, which could be subdivided once more, albeit only approximately, into E(D) = mc²(5/22) + mc²(16/22), where 5 is the Kaluza–Klein spacetime dimension, 16 is the number of bosonic extra dimensions of Heterotic superstrings, and 5/22 ≈ 22% is approximately the density of the dark matter-like energy of the ergosphere of the Kerr geometry. As for the actual design of our nano reactor, this is closely related to branching clusters of polymers, frequently called lattice animals. In other words, we will have Casimir spheres instead of Casimir plates, and these spheres will basically be nanoparticles modelling lattice animals. Here D = 4 will be regarded as the spacetime dimensionality, while D = 6 of percolation gives the compactified superstring dimensions and D = 8 is the dimension of a corresponding superspace.
Guaranteed cost consensus analysis and design problems for high-dimensional multi-agent systems with time-varying delays are investigated. The idea of guaranteed cost control is introduced into consensus problems for high-dimensional multi-agent systems with time-varying delays, where a cost function is defined based on the state errors among neighboring agents and the control inputs of all the agents. By the state space decomposition approach and the linear matrix inequality (LMI), sufficient conditions for guaranteed cost consensus and consensualization are given. Moreover, a guaranteed cost upper bound of the cost function is determined. It should be mentioned that these LMI criteria depend on the change rate of the time delays and the maximum time delay, while the guaranteed cost upper bound depends only on the maximum time delay and is independent of the Laplacian matrix. Finally, numerical simulations are given to demonstrate the theoretical results.
The paper presents a very simple and straightforward yet purely mathematical derivation of the structure of actual spacetime from quantum set theory. This is achieved by utilizing elements of the topological theory of cobordism and the Menger–Urysohn dimensional theory in conjunction with the von Neumann–Connes dimensional function of the Klein–Penrose modular holographic boundary of the E8E8 exceptional Lie group bulk of our universe. The final result is a lucid, sharp mental picture, namely that the quantum wave is an empty set representing the surface, i.e. the boundary, of the zero-set quantum particle, and in turn quantum spacetime is simply the boundary or the surface of the quantum wave empty set. The essential difference between the quantum wave and quantum spacetime is that the wave is a simple empty set while spacetime is a multi-fractal type of infinitely many empty sets with increasing degrees of emptiness.
We provide a new class of interior solutions for a (2+1)-dimensional anisotropic star in Finch and Skea spacetime corresponding to the BTZ black hole. We develop the model by considering the MIT bag model EOS and a particular ansatz for the metric function g_rr proposed by Finch and Skea [M.R. Finch and J.E.F. Skea, Class. Quantum Grav. 6 (1989) 467]. Our model is free from central singularity and satisfies all the physical requirements for the acceptability of the model.
Some properties of the NUT-Taub-like spacetime, such as the surface of infinite red-shift, the horizon, the singularity, and the area of the NUT-Taub-like black hole, are discussed. Furthermore, the geodesics in the NUT-Taub-like spacetime are obtained in some special cases. Specifically, the circular orbits for a massive particle are derived, which reduce to the cases of the Schwarzschild spacetime and the NUT-Taub spacetime when m* = 0 and m* ≪ M, respectively.
We use a dual Einstein–Kaluza spacetime to calculate the exact energy density of dark energy and dark matter using a novel topological computation method. Starting from the said spacetime and 't Hooft's topological renormalon, as well as the corresponding symmetry group, we show how the zero-set quantum particle and the empty-set quantum wave interact with the vacuum and give rise to pure dark energy and pure dark matter, along with the ordinary energy density of the cosmos. The consistency of the exact calculation with the accurate observations attests to the reality of 't Hooft's renormalon dark matter, pure dark energy, and accelerated cosmic expansion.
Parallel multi-thread processing in advanced intelligent processors is the core of realizing high-speed and high-capacity signal processing systems. Optical neural networks (ONNs) have the native advantages of high parallelization, large bandwidth, and low power consumption to meet the demands of big data. Here, we demonstrate a dual-layer ONN with a Mach–Zehnder interferometer (MZI) network and a nonlinear layer, in which the nonlinear activation function is achieved by optical-electronic signal conversion. Two frequency components from a microcomb source carrying digit datasets are simultaneously imposed and intelligently recognized through the ONN. We successfully achieve the digit classification of the different frequency components by demultiplexing the output signal and testing the power distribution. Efficient parallelization with wavelength division multiplexing is demonstrated in our high-dimensional ONN. This work provides a high-performance architecture for future parallel high-capacity optical analog computing.
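A single MZI node of the kind composing such a network can be sketched as a 2×2 unitary: beamsplitter, internal phase shift, beamsplitter. Conventions for the phase placement vary between implementations; the one below is an assumed common choice, not necessarily the paper's:

```python
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # ideal 50:50 beamsplitter

def mzi(theta):
    """2x2 MZI transfer matrix: beamsplitter, internal phase, beamsplitter."""
    phase = np.diag([np.exp(1j * theta), 1.0])
    return BS @ phase @ BS

# The internal phase steers optical power between the two output ports:
# |T00|^2 = sin^2(theta/2), so tuning theta programs a linear-layer weight.
for theta in (0.0, np.pi / 2, np.pi):
    T = mzi(theta)
    print(round(abs(T[0, 0]) ** 2, 3), round(np.sin(theta / 2) ** 2, 3))

# Any mesh of such nodes is a product of unitaries, hence lossless (unitary):
U = mzi(0.7) @ mzi(1.9)
assert np.allclose(U.conj().T @ U, np.eye(2))
```

Cascading many such nodes lets the mesh realize an arbitrary linear transformation, with the optical-electronic conversion supplying the nonlinearity between layers.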
Image matching technology is theoretically significant and practically promising in the field of autonomous navigation. To address the shortcomings of existing image matching navigation technologies, the concept of a high-dimensional combined feature is presented, based on sequence image matching navigation. To balance the distribution of high-dimensional combined features against the shortcomings of using geometric relations alone, we propose a method based on Delaunay triangulation to improve the feature, adding the regional characteristics of the features to their geometric characteristics. Finally, the k-nearest neighbor (KNN) algorithm is adopted to optimize the searching process. Simulation results show that matching can be realized at rotation angles of −8° to 8° and scale factors of 0.9 to 1.1, and when the image size is 160 pixel × 160 pixel, the matching time is less than 0.5 s. Therefore, the proposed algorithm can substantially reduce computational complexity, improve the matching speed, and exhibit robustness to rotation and scale changes.
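The Delaunay-then-KNN pipeline can be sketched as below. The particular descriptors (mean incident edge length plus an incidence count) are our illustrative stand-ins for the paper's combined geometric and regional characteristics:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(3)

# Feature points extracted from a reference image (coordinates are synthetic).
pts = rng.uniform(0, 160, size=(40, 2))
tri = Delaunay(pts)

# Per point, combine a geometric descriptor (mean length of incident triangle
# edges) with a regional one (here, the point's triangle-edge incidence count).
count = np.zeros(len(pts))
edge_sum = np.zeros(len(pts))
for a, b, c in tri.simplices:
    for i, j in ((a, b), (b, c), (c, a)):
        d = np.linalg.norm(pts[i] - pts[j])
        edge_sum[i] += d
        edge_sum[j] += d
        count[i] += 1
        count[j] += 1
features = np.c_[edge_sum / np.maximum(count, 1), count]

# KNN search over the combined features (brute force for clarity).
def knn(query, feats, k=3):
    dist = np.linalg.norm(feats - query, axis=1)
    return np.argsort(dist)[:k]

nearest = knn(features[0], features, k=3)
print(nearest[0])  # a point's own feature is its nearest neighbor: 0
```

In the full method the query features would come from the sensed image, and the KNN step prunes the candidate correspondences before geometric verification.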
Latent factor (LF) models are highly effective in extracting useful knowledge from the High-Dimensional and Sparse (HiDS) matrices commonly seen in various industrial applications. An LF model usually adopts iterative optimizers, which may consume many iterations to achieve a local optimum, resulting in considerable time cost. Hence, determining how to accelerate the training process of LF models has become a significant issue. To address this, this work proposes a randomized latent factor (RLF) model. It incorporates the principle of randomized learning techniques from neural networks into the LF analysis of HiDS matrices, thereby greatly alleviating the computational burden. It also extends a standard learning process for randomized neural networks to the context of LF analysis, so that the resulting model represents an HiDS matrix correctly. Experimental results on three HiDS matrices from industrial applications demonstrate that, compared with state-of-the-art LF models, RLF achieves significantly higher computational efficiency and comparable prediction accuracy for missing data. It provides an important alternative approach to the LF analysis of HiDS matrices, which is especially desirable for industrial applications demanding highly efficient models.
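The randomized-learning principle carried over from neural networks can be sketched in LF terms: draw one factor matrix randomly and keep it fixed, then obtain the other by direct least squares on the observed entries, replacing many optimizer iterations with a single pass. This is a minimal sketch of the idea, not the paper's RLF algorithm, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# A small "HiDS" matrix: low-rank ground truth with ~90% entries missing.
m, n, r = 60, 50, 3
truth = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))
mask = rng.random((m, n)) < 0.1                  # ~10% observed

# Randomized-learning step: draw row factors P randomly and KEEP THEM FIXED,
# then fit only the column factors Q -- one least-squares solve per column.
k = 25                                           # randomized latent dimension
P = rng.normal(size=(m, k)) / np.sqrt(k)
Q = np.zeros((k, n))
for j in range(n):
    obs = mask[:, j]
    if obs.any():
        Q[:, j], *_ = np.linalg.lstsq(P[obs], truth[obs, j], rcond=None)

pred = P @ Q
rmse_obs = np.sqrt(np.mean((pred - truth)[mask] ** 2))
print(rmse_obs)  # near zero on observed entries: no iterative optimizer needed
```

The trade-off is the classic one for randomized learners: the fixed random factor removes the iteration cost, while prediction accuracy on the missing entries depends on choosing the latent dimension and regularization well.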
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12375057, 11947301, and 12047502) and the Fundamental Research Funds for the Central Universities, China University of Geosciences (Wuhan) (Grant No. G1323523064).
Funding: Supported by the National Natural Science Foundation of China (Nos. 12402142, 11832013, and 11572134), the Natural Science Foundation of Hubei Province (No. 2024AFB235), the Hubei Provincial Department of Education Science and Technology Research Project (No. Q20221714), and the Opening Foundation of the Hubei Key Laboratory of Digital Textile Equipment (Nos. DTL2023019 and DTL2022012).
Funding: Supported by the Youth Program of the Natural Science Foundation of Guangxi (Grant No. 2021GXNSFBA075049), the Doctoral Start-up Foundation of Guangxi University of Science and Technology (Grant No. 19Z21), the National Natural Science Foundation of China (Grant Nos. 12165009 and 11865005), the Hunan Provincial Natural Science Foundation (Grant No. 2023JJ30487), and the Starting Foundation of Guangxi University of Science and Technology (Grant No. 24Z17).
Funding: Supported by the Project of the Hubei Provincial Department of Science and Technology (Grant Nos. 2022CFB957 and 2022CFB475) and the National Natural Science Foundation of China (Grant No. 11847118).
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 12303079, 12481540180 and 12475057) and the postdoctoral program of Purple Mountain Observatory, Chinese Academy of Sciences.
Abstract: In this paper, we present the second post-Newtonian solution for the quasi-Keplerian motion of a test particle in the regular Simpson–Visser black-bounce spacetime, which has a bounce parameter a. The obtained solution is formulated in terms of the orbital energy, the angular momentum, and the bounce parameter of the black hole. We explicitly analyze the leading effects of the bounce parameter, which has dimensions of length, on the test particle's orbit, including the periastron advance and the orbital period. We then apply this model to the precessing motion of OJ 287 and determine the upper limit of the dimensionless bounce parameter as a/m = 3.45 ± 0.01, where m is the mass of the regular black hole. Compared with the bound given by the periastron advance of the star S2, our bound on a/m is reduced by one order of magnitude, although our upper limit on a still needs further improvement.
Abstract: Based on the Many Worlds Interpretation, I describe reality as a multilayer spacetime in which parallel layers play the role of alternative timelines. I link physics to ethics, arguing that one's moral choices shape one's course in the multiverse. I consider one's ethical decisions as decoherence events leading to movement between alternative timelines, toward lighter (higher) or heavier (lower) realities. Sometimes, along one's curvilinear path in spacetime, one can even experience falling toward lower layers, slipping through wormholes. This theory supports free will and the simulation hypothesis. With this background, I explore the idea that a new theory of gravity might open new possibilities to shape matter and change our worldview through the invention of new technology, transforming information into waves and then into solid matter, paving the way for a new Multiverse Aeon for humanity.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 12201446), the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (Grant No. 22KJB110005), and the Shuangchuang Program of Jiangsu Province (Grant No. JSSCBS20220898).
Abstract: It is known that monotone recurrence relations can induce a class of twist homeomorphisms on the high-dimensional cylinder, which extends the class of monotone twist maps on the annulus or two-dimensional cylinder. By constructing a bounded solution of the monotone recurrence relation, we obtain the main conclusion of this paper: the induced homeomorphism has Birkhoff orbits provided there is a compact forward-invariant set. This generalizes Angenent's results in the low-dimensional case.
Funding: supported in part by the Young Scientists Fund of the National Natural Science Foundation of China (Grant Nos. 82304253 and 82273709), the Foundation for Young Talents in Higher Education of Guangdong Province (Grant No. 2022KQNCX021), and the PhD Starting Project of Guangdong Medical University (Grant No. GDMUB2022054).
Abstract: Objective Humans are exposed to complex mixtures of environmental chemicals and other factors that can affect their health. Analysis of these mixture exposures presents several key challenges for environmental epidemiology and risk assessment, including high dimensionality, correlated exposures, and subtle individual effects. Methods We proposed a novel statistical approach, the generalized functional linear model (GFLM), to analyze the health effects of exposure mixtures. The GFLM treats the effect of mixture exposures as a smooth function by reordering exposures according to specific mechanisms and capturing internal correlations, providing a meaningful estimation and interpretation. Its robustness and efficiency were evaluated under various scenarios through extensive simulation studies. Results We applied the GFLM to two datasets from the National Health and Nutrition Examination Survey (NHANES). In the first application, we examined the effects of 37 nutrients on BMI (2011–2016 cycles). The GFLM identified a significant mixture effect, with fiber and fat emerging as the nutrients with the greatest negative and positive effects on BMI, respectively. In the second application, we investigated the association between four per- and polyfluoroalkyl substances (PFAS) and gout risk (2007–2018 cycles). Unlike traditional methods, the GFLM indicated no significant association, demonstrating its robustness to multicollinearity. Conclusion The GFLM framework is a powerful tool for mixture exposure analysis, offering improved handling of correlated exposures and interpretable results. It demonstrates robust performance across various scenarios and real-world applications, advancing our understanding of complex environmental exposures and their health impacts in environmental epidemiology and toxicology.
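The core functional-linear-model idea, treating a mechanism-ordered exposure vector as evaluations of a smooth coefficient function so that only a few basis coefficients are estimated instead of many correlated ones, can be sketched as follows. This is a hedged illustration with simulated data, a plain polynomial basis, and ordinary least squares; the published GFLM's basis, penalty, and link choices are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy mixture data: 500 subjects, 20 correlated exposures sharing a factor.
n, p = 500, 20
base = rng.normal(size=(n, 1))
X = 0.7 * base + 0.3 * rng.normal(size=(n, p))

# True effect varies smoothly across the (reordered) exposure index.
t = np.linspace(0, 1, p)
beta_true = np.sin(np.pi * t)                # smooth coefficient function
y = X @ beta_true / p + rng.normal(scale=0.1, size=n)

# GFLM-style fit: expand beta(t) in a small polynomial basis, so only
# K = 4 coefficients are estimated rather than 20 collinear slopes.
K = 4
B = np.vander(t, K, increasing=True)         # p x K basis matrix
Z = X @ B / p                                # n x K functional covariates
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
beta_hat = B @ coef                          # estimated coefficient function

err = np.max(np.abs(beta_hat - beta_true))
print(round(err, 2))
```

The dimension reduction from p exposures to K basis coefficients is what gives functional approaches their stability under the strong exposure correlations described in the abstract.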
Funding: supported by the Fundamental Research Program of Shanxi Province (Grant Nos. 202203021211088, 202403021212254 and 202403021221109) and the Graduate Research Innovation Project of Shanxi Province (Grant No. 2024KY616).
Abstract: Data collected in fields such as cybersecurity and biomedicine often exhibit high dimensionality and class imbalance. To address the problem of low classification accuracy for minority-class samples arising from numerous irrelevant and redundant features in high-dimensional imbalanced data, we proposed a novel feature selection method named AMF-SGSK, based on an adaptive multi-filter and subspace-based gaining-sharing knowledge. First, a balanced dataset is obtained by random under-sampling. Second, combining the feature importance score with the AUC score of each filter method, we proposed a concept called feature hardness to judge the importance of features, which can adaptively select the essential features. Finally, the optimal feature subset is obtained by gaining-sharing knowledge in multiple subspaces. This approach effectively achieves dimensionality reduction for high-dimensional imbalanced data. Experimental results on 30 benchmark imbalanced datasets show that AMF-SGSK performs better than eight other commonly used algorithms, including BGWO and IG-SSO, in terms of F1-score, AUC, and G-mean. The mean values of F1-score, AUC, and G-mean for AMF-SGSK are 0.950, 0.967, and 0.965, respectively, the highest among all algorithms, and its mean G-mean is higher than those of IG-PSO, ReliefF-GWO, and BGOA by 3.72%, 11.12%, and 20.06%, respectively. Furthermore, the selected feature ratio is below 0.01 across the ten selected datasets, further demonstrating the proposed method's overall superiority over competing approaches. AMF-SGSK can adaptively remove irrelevant and redundant features and effectively improve the classification accuracy of high-dimensional imbalanced data, providing scientific and technological references for practical applications.
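The "feature hardness" idea combines a filter's feature-importance score with an AUC score; the abstract does not give the exact combination rule, so the sketch below simply weights each normalized filter score by a single-feature ranking AUC and averages across two hand-rolled filters (Fisher score and absolute correlation), after random under-sampling. The toy data and the combination rule are illustrative assumptions, not the AMF-SGSK algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy imbalanced data: 200 majority vs 20 minority samples, 10 features;
# only features 0 and 1 carry class information, the rest are noise.
n_maj, n_min, d = 200, 20, 10
X_maj = rng.normal(0.0, 1.0, (n_maj, d))
X_min = rng.normal(0.0, 1.0, (n_min, d))
X_min[:, :2] += 2.0                       # shift the informative features
X = np.vstack([X_maj, X_min])
y = np.array([0] * n_maj + [1] * n_min)

# Step 1: random under-sampling of the majority class to balance.
keep = rng.choice(np.where(y == 0)[0], size=n_min, replace=False)
idx = np.concatenate([keep, np.where(y == 1)[0]])
Xb, yb = X[idx], y[idx]

# Step 2: two simple filter scores per feature.
def fisher_score(X, y):
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    v0, v1 = X[y == 0].var(0), X[y == 1].var(0)
    return (m0 - m1) ** 2 / (v0 + v1 + 1e-12)

def abs_corr(X, y):
    yc, Xc = y - y.mean(), X - X.mean(0)
    return np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)

def single_feature_auc(x, y):
    # ranking AUC of one feature = P(score of a positive > score of a negative)
    pos, neg = x[y == 1], x[y == 0]
    return (pos[:, None] > neg[None, :]).mean()

# Step 3: "hardness" = importance weighted by AUC, averaged over filters.
scores = []
for f in (fisher_score, abs_corr):
    s = f(Xb, yb)
    s = s / (s.max() + 1e-12)             # normalize importance to [0, 1]
    auc = np.array([max(a, 1 - a) for a in
                    (single_feature_auc(Xb[:, j], yb) for j in range(d))])
    scores.append(s * auc)

hardness = np.mean(scores, axis=0)
selected = np.argsort(hardness)[::-1][:2]
print(np.sort(selected))                  # the two informative features
```

In the full method this adaptive scoring is only the front end; the selected pool is then refined by the gaining-sharing-knowledge search in multiple subspaces.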
Abstract: The present paper is basically a synthesis resulting from incorporating Kerr spinning black hole geometry into E-infinity topology, then letting the result bear on the vacuum zero-point Casimir effect as well as the cosmic dark energy and dark matter density. In E-infinity theory a quantum particle is represented by a Hausdorff dimension Φ, where Φ = 2/(√5 + 1). The quantum wave, on the other hand, is represented by Φ^2. For it to be a wave and a particle simultaneously, intersection theory leads us to (Φ)(Φ^2) = Φ^3, which will be shown here to be twice the value of the famous Casimir force of the vacuum for a massless scalar field. Thus in the present work a basically topological interpretation of the Casimir effect is given, as a natural intrinsic property of the geometrical-topological structure of the quantum-Cantorian micro spacetime. This new interpretation complements the earlier conventional interpretation as a vacuum fluctuation or as a Schwinger source, and links the Casimir energy to the so-called missing dark energy density of the cosmos. From the viewpoint of the present work, the Casimir pressure is a local effect acting on the Casimir plates that constitute the local boundary condition, while dark energy is nothing but the global combined effect of infinitely many quantum waves acting on the Möbius-like holographic boundary of the entire universe. Since this higher-dimensional Möbius-like boundary is one sided, there is no outside to balance the internal collective Casimir pressure, which then manifests itself as the force behind cosmic expansion, that is to say, dark energy. Thus, analogous to the exact irrational value of the ordinary energy density of spacetime E(O) = (Φ^5/2)mc^2, we now have P(Casimir) = (Φ^3/2)(ch/d^2), where c is the speed of light, m is the mass, h is the Planck constant, and d is the plate separation.
In addition, the new emerging geometry combined with the topology of E-infinity theory leads directly to identifying dark matter with the quasi matter of the ergosphere. As a direct consequence of this new insight, E = mc^2 can be written as E = E(O) + E(D), where the exact rational approximation E(O) = mc^2/22 is the ordinary energy density of the cosmos and the exact rational approximation E(D) = mc^2(21/22) is the corresponding dark energy. The latter can be subdivided once more, albeit only approximately, into E(D) = mc^2(5/22) + mc^2(16/22), where 5 is the Kaluza–Klein spacetime dimension, 16 is the number of bosonic extra dimensions of heterotic superstrings, and 5/22 ≈ 22% is approximately the density of the dark-matter-like energy of the ergosphere of the Kerr geometry. As for the actual design of our nano reactor, this is closely related to branching clusters of polymers, frequently called lattice animals. In other words, we will have Casimir spheres instead of Casimir plates, and these spheres will basically be nanoparticles modelling lattice animals. Here D = 4 will be regarded as the spacetime dimensionality, while D = 6 of percolation theory gives the compactified superstring dimensions and D = 8 is the dimension of a corresponding superspace.
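The golden-mean identities underlying the above coefficients are easy to check numerically. The sketch below verifies only the arithmetic, Φ² = 1 − Φ, Φ·Φ² = Φ³, and Φ⁵/2 ≈ 1/22, and takes no position on the physical interpretation.

```python
import math

phi = 2 / (math.sqrt(5) + 1)        # Hausdorff dimension of the quantum particle
assert abs(phi - (math.sqrt(5) - 1) / 2) < 1e-15

# Particle (phi) intersected with wave (phi^2) gives phi^3.
assert abs(phi * phi**2 - phi**3) < 1e-15
assert abs(phi**2 - (1 - phi)) < 1e-12       # defining relation of the golden mean
assert abs(phi**3 - (2 * phi - 1)) < 1e-12   # hence phi^3 = 2*phi - 1 ≈ 0.2361

# Ordinary vs dark energy density coefficients, E(O)/mc^2 and E(D)/mc^2.
E_O = phi**5 / 2                    # ≈ 0.04508, near the rational 1/22 ≈ 0.04545
E_D = 1 - E_O                       # ≈ 0.9549, near 21/22 ≈ 0.9545
print(round(E_O, 5), round(E_D, 5))
```

This makes explicit why mc²/22 and mc²(21/22) are described as rational approximations of the irrational values (Φ⁵/2)mc² and (1 − Φ⁵/2)mc².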
Funding: supported by the Shaanxi Province Natural Science Foundation Research Projects (Grant No. 2016JM6014), the Innovation Foundation of the High-Tech Institute of Xi'an (Grant No. 2015ZZDJJ03), and the Youth Foundation of the High-Tech Institute of Xi'an (Grant No. 2016QNJJ004).
Abstract: Guaranteed cost consensus analysis and design problems for high-dimensional multi-agent systems with time-varying delays are investigated. The idea of guaranteed cost control is introduced into consensus problems for high-dimensional multi-agent systems with time-varying delays, where a cost function is defined based on state errors among neighboring agents and the control inputs of all the agents. By the state-space decomposition approach and the linear matrix inequality (LMI) technique, sufficient conditions for guaranteed cost consensus and consensualization are given. Moreover, a guaranteed cost upper bound of the cost function is determined. It should be mentioned that these LMI criteria depend on the change rate of the time delays and the maximum time delay, while the guaranteed cost upper bound depends only on the maximum time delay and is independent of the Laplacian matrix. Finally, numerical simulations are given to demonstrate the theoretical results.
Abstract: The paper presents a very simple and straightforward yet purely mathematical derivation of the structure of actual spacetime from quantum set theory. This is achieved by utilizing elements of the topological theory of cobordism and the Menger–Urysohn dimensional theory, in conjunction with the von Neumann–Connes dimensional function of the Klein–Penrose modular holographic boundary of the E8E8 exceptional Lie group bulk of our universe. The final result is a lucid, sharp mental picture: the quantum wave is an empty set representing the surface, i.e. the boundary, of the zero-set quantum particle, and in turn quantum spacetime is simply the boundary, or surface, of the quantum-wave empty set. The essential difference between the quantum wave and quantum spacetime is that the wave is a simple empty set, while spacetime is a multi-fractal type of infinitely many empty sets with increasing degrees of emptiness.
Abstract: We provide a new class of interior solutions of a (2+1)-dimensional anisotropic star in Finch and Skea spacetime corresponding to the BTZ black hole. We develop the model by considering the MIT bag model EOS and a particular ansatz for the metric function g_rr proposed by Finch and Skea [M. R. Finch and J. E. F. Skea, Class. Quantum Grav. 6 (1989) 467]. Our model is free from central singularity and satisfies all the physical requirements for the acceptability of the model.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 10475036), the Natural Science Foundation of Liaoning Province, China (Grant No. 20032012), and the Scientific Research Foundation of the Higher Education Institute of Liaoning Province, China (Grant No. 05L215).
Abstract: Some properties related to the NUT-Taub-like spacetime, such as the surface of infinite red-shift, the horizon, the singularity, and the area of the NUT-Taub-like black hole, are discussed. Furthermore, the geodesics in the NUT-Taub-like spacetime are obtained in some special cases. Specifically, the circular orbits for a massive particle are derived, which reduce to the cases of the Schwarzschild spacetime and the NUT-Taub spacetime when m* = 0 and m* ≪ M, respectively.
Abstract: We use a dual Einstein–Kaluza spacetime to calculate the exact energy density of dark energy and dark matter using a novel topological computation method. Starting from the said spacetime and 't Hooft's topological renormalon, as well as the corresponding symmetry group, we show how the zero-set quantum particle and the empty-set quantum wave interact with the vacuum and give rise to pure dark energy and pure dark matter, along with the ordinary energy density of the cosmos. The consistency of the exact calculation with accurate observations attests to the reality of 't Hooft's renormalon, dark matter, pure dark energy, and accelerated cosmic expansion.
Funding: Peng Xie acknowledges the support of the China Scholarship Council (Grant No. 201804910829).
Abstract: Parallel multi-thread processing in advanced intelligent processors is the core of realizing high-speed, high-capacity signal processing systems. Optical neural networks (ONNs) have the native advantages of high parallelization, large bandwidth, and low power consumption to meet the demands of big data. Here, we demonstrate a dual-layer ONN with a Mach-Zehnder interferometer (MZI) network and a nonlinear layer, where the nonlinear activation function is achieved by optical-electronic signal conversion. Two frequency components from a microcomb source carrying digit datasets are simultaneously imposed and intelligently recognized through the ONN. We successfully achieve digit classification of the different frequency components by demultiplexing the output signal and testing the power distribution. Efficient parallelization with wavelength division multiplexing is demonstrated in our high-dimensional ONN. This work provides a high-performance architecture for future parallel high-capacity optical analog computing.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 51205193 and 51475221).
Abstract: Image matching technology is theoretically significant and practically promising in the field of autonomous navigation. Addressing the shortcomings of existing image matching navigation technologies, the concept of a high-dimensional combined feature is presented, based on sequence image matching navigation. To balance the distribution of high-dimensional combined features against the shortcomings of using geometric relations alone, we propose a method based on Delaunay triangulation to improve the feature, combining the regional characteristics of the features with their geometric characteristics. Finally, the k-nearest neighbor (KNN) algorithm is adopted to optimize the searching process. Simulation results show that matching can be realized at rotation angles of −8° to 8° and scale factors of 0.9 to 1.1, and that when the image size is 160 pixel × 160 pixel, the matching time is less than 0.5 s. Therefore, the proposed algorithm can substantially reduce computational complexity, improve the matching speed, and exhibit robustness to rotation and scale changes.
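A minimal sketch of the geometric half of this idea, assuming nothing about the paper's actual descriptor: build a Delaunay triangulation over detected feature points, describe each triangle by its sorted interior angles (invariant to the rotations and scale changes quoted above), and match descriptors with a nearest-neighbor query. The regional (grey-level) component of the combined feature is omitted here.

```python
import numpy as np
from scipy.spatial import Delaunay, cKDTree

rng = np.random.default_rng(1)
pts = rng.uniform(0, 160, (30, 2))        # reference feature points (160x160 image)

def triangle_descriptors(points):
    """Per-triangle descriptor: sorted interior angles (rotation/scale invariant)."""
    tri = Delaunay(points)
    descs = []
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        # side lengths opposite each vertex
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(a - c)
        lc = np.linalg.norm(a - b)
        angles = []
        for opp, s1, s2 in ((la, lb, lc), (lb, la, lc), (lc, la, lb)):
            cosv = np.clip((s1**2 + s2**2 - opp**2) / (2 * s1 * s2), -1, 1)
            angles.append(np.arccos(cosv))      # law of cosines
        descs.append(sorted(angles))
    return np.array(descs)

# Simulate the sensed image: rotate the scene by 8 degrees about the centre.
theta = np.deg2rad(8)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
pts_rot = (pts - 80) @ R.T + 80

d_ref = triangle_descriptors(pts)
d_sen = triangle_descriptors(pts_rot)

# KNN search: each sensed triangle finds its nearest reference triangle.
dist, idx = cKDTree(d_ref).query(d_sen, k=1)
print(np.median(dist))   # near zero: the triangulation survives the rotation
```

Because the Delaunay triangulation is invariant under rigid motions, the sensed descriptors land essentially on top of the reference ones, so the KD-tree lookup replaces an exhaustive pairwise comparison.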
Funding: supported in part by the National Natural Science Foundation of China (Grant Nos. 61772493 and 91646114), the Chongqing Research Program of Technology Innovation and Application (Grant No. cstc2017rgzn-zdyfX0020), and in part by the Pioneer Hundred Talents Program of the Chinese Academy of Sciences.
Abstract: Latent factor (LF) models are highly effective at extracting useful knowledge from high-dimensional and sparse (HiDS) matrices, which are commonly seen in various industrial applications. An LF model usually adopts iterative optimizers, which may consume many iterations to reach a local optimum, resulting in considerable time cost. Hence, determining how to accelerate the training process of LF models has become a significant issue. To address this, this work proposes a randomized latent factor (RLF) model. It incorporates the principle of randomized learning techniques from neural networks into the LF analysis of HiDS matrices, thereby greatly alleviating the computational burden. It also extends a standard learning process for randomized neural networks to the context of LF analysis, so that the resulting model represents an HiDS matrix correctly. Experimental results on three HiDS matrices from industrial applications demonstrate that, compared with state-of-the-art LF models, RLF achieves significantly higher computational efficiency and comparable prediction accuracy for missing data. It provides an important alternative approach to the LF analysis of HiDS matrices, which is especially desirable for industrial applications demanding highly efficient models.
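The randomized-learning principle referenced here, fix one factor matrix at random and solve the other in closed form instead of iterating, can be sketched in a few lines. This is a schematic toy under stated assumptions (a fixed random row factor P and per-column ridge solves on observed entries only), not the RLF model's actual learning scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy HiDS setting: rank-4 ground truth, only ~25% of entries observed.
m, n, k = 60, 50, 4
truth = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))
mask = rng.random((m, n)) < 0.25
R = np.where(mask, truth, 0.0)        # sparse observation matrix

# Randomized LF sketch: draw the row factor P once at random (no iterative
# optimizer), then obtain each column of Q by a closed-form ridge solve
# restricted to that column's observed entries.
d, lam = 8, 0.1                       # latent dimension, ridge regularizer
P = rng.normal(size=(m, d))           # fixed random factor
Q = np.zeros((d, n))
for j in range(n):
    obs = mask[:, j]
    A = P[obs]                        # rows of P where column j is observed
    Q[:, j] = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ R[obs, j])

pred = P @ Q                          # completed matrix estimate
rmse_obs = np.sqrt(np.mean((pred - truth)[mask] ** 2))
print(round(rmse_obs, 3))             # fit error on the observed entries
```

The single pass of n small linear solves replaces the many epochs an iterative optimizer would spend, which is the source of the speed-up the abstract reports.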