Funding: Supported by the National Natural Science Foundation of China (NSFC) [Grant No. 62072469].
Abstract: With the rapid development of 5G technology, the proportion of video traffic on the Internet is increasing, putting pressure on the network infrastructure. Edge computing technology provides a feasible solution for optimizing video content distribution. However, the limited cache capacity of edge nodes and dynamic user requests make edge caching more complex. Therefore, we propose a recommendation-driven edge Caching network architecture for the Full life cycle of video streaming (FlyCache), designed to improve users' Quality of Experience (QoE) and reduce backhaul traffic consumption. FlyCache implements intelligent cache management across three key stages: before playback, during playback, and after playback. Specifically, we introduce a cache placement policy for the before-playback stage, a dynamic prefetching and cache admission policy for the during-playback stage, and a progressive cache eviction policy for the after-playback stage. To validate the effectiveness of FlyCache, we developed a user-behavior-driven edge caching simulation framework incorporating recommendation mechanisms. Experiments conducted on the MovieLens and synthetic datasets demonstrate that FlyCache outperforms other caching strategies in terms of byte hit rate, backhaul traffic, and delayed startup rate.
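The three-stage policy can be pictured with a minimal, hypothetical Python sketch (the class, method names, and eviction rule below are illustrative assumptions, not FlyCache's actual implementation): chunks recommended before playback are placed proactively, chunks requested during playback are admitted and prefetched, and chunks of finished videos are marked for progressive eviction.

```python
from collections import OrderedDict

class LifecycleCache:
    """Toy edge cache illustrating before/during/after-playback handling.
    Capacity is counted in chunks; a real system would count bytes."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.store = OrderedDict()  # chunk_id -> lifecycle stage tag

    def _evict_if_full(self):
        while len(self.store) >= self.capacity:
            # Progressive eviction: drop 'after'-playback chunks first, then the oldest.
            victim = next((c for c, s in self.store.items() if s == "after"), None)
            self.store.pop(victim if victim is not None else next(iter(self.store)))

    def place(self, chunk_id):                  # before-playback: recommendation-driven placement
        self._evict_if_full()
        self.store[chunk_id] = "before"

    def admit(self, chunk_id, next_chunks=()):  # during-playback: admission plus prefetch
        self._evict_if_full()
        self.store[chunk_id] = "during"
        for c in next_chunks:                   # prefetch the next few chunks of the same video
            self._evict_if_full()
            self.store.setdefault(c, "during")

    def finish(self, chunk_ids):                # after-playback: mark for progressive eviction
        for c in chunk_ids:
            if c in self.store:
                self.store[c] = "after"

cache = LifecycleCache(capacity=3)
cache.place("v1_c1")
cache.admit("v1_c1", next_chunks=["v1_c2", "v1_c3"])
cache.finish(["v1_c1", "v1_c2", "v1_c3"])
cache.place("v2_c1")  # eviction now prefers the finished video's chunks
print(list(cache.store.items()))
```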
Funding: Supported by the Noncommunicable Chronic Diseases National Science and Technology Major Project (2024ZD0523200) and the National Natural Science Foundation of China (62301330, 62101346).
Abstract: The convergence of large language models (LLMs) and virtual reality (VR) technologies has led to significant breakthroughs across multiple domains, particularly in healthcare and medicine. Owing to its immersive and interactive capabilities, VR technology has demonstrated exceptional utility in surgical simulation, rehabilitation, physical therapy, mental health, and psychological treatment. By creating highly realistic and precisely controlled environments, VR not only enhances the efficiency of medical training but also enables personalized therapeutic approaches for patients. The convergence of LLMs and VR extends the potential of both technologies. LLM-empowered VR can transform medical education through interactive learning platforms and address complex healthcare challenges with comprehensive solutions. This convergence enhances the quality of training, decision-making, and patient engagement, paving the way for innovative healthcare delivery. This study comprehensively reviews the current applications, research advances, and challenges associated with these two technologies in healthcare and medicine. The rapid evolution of these technologies is driving the healthcare industry toward greater intelligence and precision, establishing them as critical forces in the transformation of modern medicine.
Abstract: The Software Process Workshop (SPW 2005) was held in Beijing on May 25-27, 2005. This paper introduces the motivation for organizing the workshop, as well as its theme and its paper solicitation and review process; it then summarizes the main content and insights of the 11 keynote speeches, the 30 regular papers in five sessions (“Process Content”, “Process Tools and Metrics”, “Process Management”, “Process Representation and Analysis”, and “Experience Reports”), the 8 software development support tool demonstrations, and the closing panel “Where Are We Now? Where Should We Go Next?”.
Funding: The National Natural Science Foundation of China (No. 60373066); the Opening Foundation of the State Key Laboratory of Software Engineering, Wuhan University; and the Opening Foundation of the Jiangsu Key Laboratory of Computer Information Processing Technology, Soochow University.
Abstract: Software configuration testing is used to test a piece of software with all kinds of hardware to ensure that it can run properly on them. This paper generates test cases for configuration testing with several common methods, such as multiple single-factor experiments, uniform design, and orthogonal experiment design used in other fields. It analyzes their merits, improves the orthogonal experiment design method with pairwise testing, and decreases the testing risk caused by incomplete testing through a multiple-factor-covering method. It presents a simple factor-cover method that covers all factors and pairwise combinations to the greatest degree. These methods are then compared in terms of test suite scale, coverage, and usability.
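As a rough illustration of the pairwise-coverage idea (a greedy sketch, not the paper's factor-cover method), the snippet below keeps adding the candidate configuration that covers the most still-uncovered parameter pairs. The factor names and levels are made-up examples.

```python
from itertools import combinations, product

def pairwise_suite(factors):
    """Greedy pairwise test generation. factors: dict name -> list of levels.
    Returns a list of configurations covering every 2-way level combination."""
    names = list(factors)
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in factors[a] for vb in factors[b]
    }
    # Exhaustive candidate pool; fine for small factor spaces like this sketch.
    candidates = [dict(zip(names, levels)) for levels in product(*factors.values())]
    suite = []
    while uncovered:
        # Pick the candidate configuration that covers the most uncovered pairs.
        best = max(candidates, key=lambda c: sum(
            ((a, c[a]), (b, c[b])) in uncovered for a, b in combinations(names, 2)))
        suite.append(best)
        uncovered -= {((a, best[a]), (b, best[b])) for a, b in combinations(names, 2)}
    return suite

configs = pairwise_suite({"OS": ["Win", "Linux"], "CPU": ["x86", "ARM"], "RAM": ["8G", "16G"]})
print(len(configs), "configurations instead of the full factorial of", 2 * 2 * 2)
```

The greedy result is not guaranteed minimal, but it is typically far smaller than the full factorial while still covering every pairwise combination.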
Funding: Supported by the National Basic Research Program of China (2012CB315903), the Program for Key Science and Technology Innovation Team of Zhejiang Province (2011R50010, 2013TD20), the National High Technology Research Program of China (2015AA016103), the National Natural Science Foundation of China (61379118), the Research Fund of ZTE Corporation, and the Jiaxing Science and Technology Project (No. 2014AY21021).
Abstract: When applying Software-Defined Networks (SDN) to WANs, SDN's flexibility enables cross-domain control to achieve better control scalability. However, control consistency is required by all cross-domain services to ensure that the data plane is configured consistently across different domains. This consistency process is complicated by potential failures and errors in WANs. In this paper, we propose a consistence layer that actively and passively snapshots the cross-domain control states to reduce the complexity of service realization. We implement the layer and evaluate its performance in the PlanetLab testbed for WAN emulation. The testbed conditions are substantially scaled up compared with the real network. The results show its scalability, reliability, and responsiveness in dealing with control dynamics. In the normalized results, the active and passive snapshots execute with mean times of 1.873 s and 105 ms across 135 controllers, indicating its readiness for use in a real network.
Funding: Supported by the National Natural Science Foundation of China (60970016, 61173032).
Abstract: Hardware/software (HW/SW) partitioning is one of the key processes in embedded system design. It determines which system components are assigned to hardware and which are processed by software. In contrast with previous research that focuses on developing efficient heuristics, this paper focuses on pre-processing the task graph before HW/SW partitioning, that is, enumerating all the sub-graphs that meet the requirements. Experimental results show that the original graph can be reduced to 67% of its size in the worst case and 58% in the best case. The reduced task graph saves hardware area while improving partitioning speed and accuracy.
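As a hedged sketch of the kind of pre-processing described (the connectivity requirement and size bound below are illustrative assumptions, not the paper's exact criteria), connected subgraphs of a small task graph can be enumerated by growing vertex sets outward from each node:

```python
def connected_subgraphs(adj, max_size):
    """Enumerate connected vertex subsets of an undirected task graph.
    adj: dict node -> set of neighbour nodes."""
    found = set()

    def grow(current, frontier):
        found.add(frozenset(current))
        if len(current) == max_size:
            return
        for v in sorted(frontier):
            # New frontier = neighbours of the enlarged set, minus the set itself.
            grow(current | {v}, (frontier | adj[v]) - current - {v})

    for start in adj:
        grow({start}, set(adj[start]))
    return found

# Tiny task graph: t0-t1, t1-t2, t1-t3
adj = {"t0": {"t1"}, "t1": {"t0", "t2", "t3"}, "t2": {"t1"}, "t3": {"t1"}}
subs = connected_subgraphs(adj, max_size=3)
print(len(subs), "connected subgraphs of size <= 3")
```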
Abstract: This paper presents new theoretical aspects of software engineering oriented toward product lines for building applied systems and software product families from ready-made reusable components under program-factory conditions. These aspects form new disciplines: the theory of component programming; models of system variability and interoperability; and a theory for building systems and product families from components. The principles and methods implementing these theories were realized in an instrumental and technological complex organized along lines of component development: assembling program factories using these lines, and e-learning of the new theories and technologies through the “Software Engineering” textbook used by university students.
Funding: This work is supported by the National Natural Science Foundation of China (No. 60073020), the University Natural Science Foundation of Jiangsu Province of China (No. 05KJB520119), and the Natural Science Foundation Project of Chongqing (No. CSTC2006BB2259).
Funding: Supported by the Aeronautical Science Foundation of China, No. 2023Z068051002; the 2021 Special Scientific Research on Civil Aircraft Project; the Natural Science Foundation of China, Nos. 61572056 and 61872347; and the Special Plan for the Development of Distinguished Young Scientists of ISCAS, No. Y8RC535018.
Abstract: This study proposes an image-based three-dimensional (3D) vector reconstruction method for industrial parts that can generate non-uniform rational B-spline (NURBS) surfaces with high fidelity and flexibility. The contributions of this study are threefold: first, a dataset of two-dimensional images is constructed for typical industrial parts, including hexagonal head bolts, cylindrical gears, shoulder rings, hexagonal nuts, and cylindrical roller bearings; second, a deep learning algorithm is developed for parameter extraction of 3D industrial parts, which determines the final 3D parameters and pose information of the reconstructed model using two new networks, CAD-ClassNet and CAD-ReconNet; and finally, a 3D vector shape reconstruction of mechanical parts is presented to generate NURBS surfaces from the obtained shape parameters. The final reconstructed models show that the proposed approach is highly accurate, efficient, and practical.
Funding: Researchers Supporting Project number (RSPD2024R848), King Saud University, Riyadh, Saudi Arabia.
Abstract: Disjoint sampling is critical for rigorous and unbiased evaluation of state-of-the-art (SOTA) models, e.g., Attention Graph and Vision Transformer models. When training, validation, and test sets overlap or share data, a bias is introduced that inflates performance metrics and prevents accurate assessment of a model's true ability to generalize to new examples. This paper presents an innovative disjoint sampling approach for training SOTA models for Hyperspectral Image Classification (HSIC). By separating training, validation, and test data without overlap, the proposed method enables a fairer evaluation of how well a model can classify pixels it was not exposed to during training or validation. Experiments demonstrate that the approach significantly improves a model's generalization compared to alternatives that include training and validation data in the test data (a trivial approach is to test the model on the entire hyperspectral dataset to generate the ground-truth maps; this produces higher accuracy but ultimately results in low generalization performance). Disjoint sampling eliminates data leakage between sets and provides reliable metrics for benchmarking progress in HSIC, and it is critical for advancing SOTA models and their real-world application to large-scale land mapping with hyperspectral sensors. Overall, with the disjoint test set, the deep models achieve 96.36% accuracy on the Indian Pines data, 99.73% on the Pavia University data, 98.29% on the University of Houston data, 99.43% on the Botswana data, and 99.88% on the Salinas data.
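A minimal sketch of per-class disjoint sampling over labeled hyperspectral pixels (the split ratios and the toy label map below are illustrative, not the paper's experimental setup):

```python
import numpy as np

def disjoint_split(labels, train=0.1, val=0.1, seed=0):
    """Split labeled pixel indices into disjoint train/val/test sets, per class.
    labels: 1-D array of class ids; unlabeled pixels (class 0) are ignored."""
    rng = np.random.default_rng(seed)
    train_idx, val_idx, test_idx = [], [], []
    for c in np.unique(labels):
        if c == 0:
            continue
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        n_tr = max(1, int(train * len(idx)))
        n_va = max(1, int(val * len(idx)))
        train_idx.extend(idx[:n_tr])
        val_idx.extend(idx[n_tr:n_tr + n_va])
        test_idx.extend(idx[n_tr + n_va:])        # never overlaps train or val
    return map(np.array, (train_idx, val_idx, test_idx))

labels = np.random.default_rng(1).integers(0, 4, size=1000)  # toy ground-truth map, flattened
tr, va, te = disjoint_split(labels)
assert not (set(tr) & set(te)) and not (set(va) & set(te))    # data-leakage check
print(len(tr), len(va), len(te))
```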
Abstract: Background: The sense of touch plays a crucial role in interactive behavior within virtual spaces, particularly when visual attention is absent. Although haptic feedback has been widely used to compensate for the lack of visual cues, the use of tactile information as a predictive feedforward cue to guide hand movements remains unexplored and lacks theoretical understanding. Methods: This study introduces a fingertip aero-haptic rendering method and investigates its effectiveness in directing hand movements during eyes-free spatial interactions. The wearable device incorporates a multichannel micro-airflow chamber to deliver adjustable tactile effects on the fingertips. Results: The first study verified that tactile directional feedforward cues significantly improve user capabilities in eyes-free target acquisition and that users rely heavily on haptic indications rather than spatial memory to control their hands. A subsequent study examined the impact of enriched tactile feedforward cues on assisting users in determining precise target positions during eyes-free interactions and assessed the required learning effort. Conclusions: The haptic feedforward effect holds great practical promise for eyes-free design in virtual reality. We aim to integrate cognitive models and tactile feedforward cues in the future and to apply richer tactile feedforward information to alleviate users' perceptual deficiencies.
Funding: The National Natural Science Foundation of China under contract Nos. 42176011 and 61931025, and the Fundamental Research Funds for the Central Universities of China under contract No. 24CX03001A.
Abstract: Efficient and accurate prediction of ocean-surface latent heat fluxes is essential for understanding and modeling climate dynamics. Conventional estimation methods have low resolution and limited accuracy. The transformer model, with its self-attention mechanism, effectively captures long-range dependencies. However, due to the non-linearity and uncertainty of the physical processes, the transformer model suffers from error accumulation, leading to a degradation of accuracy over time. To solve this problem, we combine the Data Assimilation (DA) technique with the transformer model and continuously correct the model state to bring it closer to the actual observations. In this paper, we propose a deep learning model called TransNetDA, which integrates a transformer, a convolutional neural network, and DA methods. By combining data-driven and DA methods for spatiotemporal prediction, TransNetDA effectively extracts multi-scale spatial features and significantly improves prediction accuracy. The experimental results indicate that TransNetDA surpasses traditional techniques in terms of root mean square error and R^2 metrics, showcasing its superior performance in predicting latent heat fluxes at the ocean surface.
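The assimilation step can be pictured with a simple nudging-style correction (a generic sketch under assumed data, not TransNetDA's actual update rule): the model forecast is pulled toward observations at observed grid points before the next prediction step.

```python
import numpy as np

def nudge(forecast, observation, obs_mask, gain=0.5):
    """Blend a model forecast with sparse observations.
    gain plays the role of a scalar Kalman-like weight: 0 trusts the model,
    1 trusts the observations wherever obs_mask is True."""
    analysis = forecast.copy()
    analysis[obs_mask] += gain * (observation[obs_mask] - forecast[obs_mask])
    return analysis

rng = np.random.default_rng(0)
truth = rng.normal(size=(32, 32))                              # toy latent-heat-flux field
forecast = truth + rng.normal(scale=0.5, size=truth.shape)     # degraded model forecast
obs_mask = rng.random(truth.shape) < 0.2                       # 20% of grid points observed
analysis = nudge(forecast, truth, obs_mask, gain=0.8)

rmse = lambda a: float(np.sqrt(np.mean((a - truth) ** 2)))
print(f"forecast RMSE {rmse(forecast):.3f} -> analysis RMSE {rmse(analysis):.3f}")
```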
Funding: Financially supported by the National Key R&D Program of China (No. 2023YFF0803404), the Zhejiang Provincial Natural Science Foundation (No. LY23D040001), the Open Research Fund of the Key Laboratory of Engineering Geophysical Prospecting and Detection of the Chinese Geophysical Society (No. CJ2021GB01), the Open Research Fund of the Changjiang River Scientific Research Institute (No. CKWV20221011/KY), the ZhouShan Science and Technology Project (No. 2023C81010), and the National Natural Science Foundation of China (No. 41904100); supported by the Chinese Natural Science Foundation Open Research Cruise (Cruise No. NORC2019–08).
Abstract: INTRODUCTION: The crustal velocity model is crucial for describing subsurface composition and structure, and it has significant implications for offshore oil and gas exploration and marine geophysical engineering (Xie et al., 2024). Currently, travel-time tomography is the most commonly used method for velocity modeling based on ocean bottom seismometer (OBS) data (Zhang et al., 2023; Sambolian et al., 2021). This method usually assumes that the sub-seafloor structure is layered and therefore faces challenges in high-precision modeling where strong lateral discontinuities are present.
Funding: Funded by the National Natural Science Foundation of China, grant number 62071491.
Abstract: Long-term petroleum production forecasting is essential for the effective development and management of oilfields. Owing to its ability to extract complex patterns, deep learning has gained popularity for production forecasting. However, existing deep learning models frequently overlook the selective utilization of information from other production wells, resulting in suboptimal performance in long-term production forecasting across multiple wells. To achieve accurate long-term petroleum production forecasts, we propose a spatial-geological perception graph convolutional neural network (SGP-GCN) that accounts for the temporal, spatial, and geological dependencies inherent in petroleum production. Utilizing the attention mechanism, the SGP-GCN effectively captures intricate correlations within production and geological data, forming a representation of each production well. Based on spatial distances and geological feature correlations, we construct a spatial-geological matrix as the weight matrix to enable differential utilization of information from other wells. Additionally, a matrix sparsification algorithm based on production clustering (SPC) is proposed to optimize the weight distribution within the spatial-geological matrix, thereby enhancing long-term forecasting performance. Empirical evaluations show that the SGP-GCN outperforms existing deep learning models, such as CNN-LSTM-SA, in long-term petroleum production forecasting, demonstrating its potential as a valuable tool for forecasting long-term petroleum production across multiple wells.
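A hedged numpy sketch of the weight-matrix idea (the inverse-distance kernel, the correlation measure, and the quantile-based sparsification below are illustrative assumptions, not the SGP-GCN's exact formulation):

```python
import numpy as np

def spatial_geological_matrix(coords, geo_features, keep_ratio=0.5):
    """Combine spatial proximity and geological similarity between wells into
    a single weight matrix, then sparsify by keeping the largest weights."""
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    spatial = 1.0 / (1.0 + dist)                 # inverse-distance proximity
    geo = np.corrcoef(geo_features)              # geological feature correlation (well x well)
    weights = spatial * np.abs(geo)
    np.fill_diagonal(weights, 0.0)
    # Sparsify: zero out all but the largest keep_ratio fraction of off-diagonal weights.
    threshold = np.quantile(weights[weights > 0], 1.0 - keep_ratio)
    return np.where(weights >= threshold, weights, 0.0)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(6, 2))         # toy well locations (x, y)
geo = rng.normal(size=(6, 4))                    # 4 geological features per well
W = spatial_geological_matrix(coords, geo)
print(np.round(W, 3))
```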
Funding: The National Natural Science Foundation of China (No. 52474068) and the Major Collaborative Innovation Project of the Prospecting Breakthrough Strategic Action in Guizhou Province (No. [2022]ZD001-003).
Abstract: Coalbed methane (CBM) is a vital unconventional energy resource, and predicting its spatiotemporal pressure dynamics is crucial for efficient development strategies. This paper proposes a novel deep learning-based data-driven surrogate model, AxialViT-ConvLSTM, which integrates an Axial Attention Vision Transformer, ConvLSTM, and an enhanced loss function to predict pressure dynamics in CBM reservoirs. The results show that the model achieves a mean square error of 0.003, a learned perceptual image patch similarity of 0.037, a structural similarity of 0.979, and an R^2 of 0.982 between predictions and actual pressures, indicating excellent performance. The model also demonstrates strong robustness and accuracy in capturing spatial-temporal pressure features.
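For reference, the scalar error metrics reported above can be computed directly from predicted and actual pressure fields (a generic numpy illustration on toy data, not the study's evaluation code; LPIPS and SSIM require dedicated image-quality libraries and are omitted here):

```python
import numpy as np

def mse(pred, true):
    return float(np.mean((pred - true) ** 2))

def r2(pred, true):
    ss_res = np.sum((true - pred) ** 2)
    ss_tot = np.sum((true - true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

rng = np.random.default_rng(0)
true_pressure = rng.uniform(2.0, 6.0, size=(64, 64))                      # toy pressure field
pred_pressure = true_pressure + rng.normal(scale=0.05, size=(64, 64))     # toy model output
print(f"MSE {mse(pred_pressure, true_pressure):.4f}, R^2 {r2(pred_pressure, true_pressure):.3f}")
```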
Funding: Supported by the National Key R&D Program of China (No. 2023YFB4502200); the National Natural Science Foundation of China (Nos. U22A2028, 61925208, 62222214, 62341411, 62102398, 62102399, U20A20227, 62302478, 62302482, 62302483, 62302480, 62302481); the Strategic Priority Research Program of the Chinese Academy of Sciences (Nos. XDB0660300, XDB0660301, XDB0660302); the Chinese Academy of Sciences Project for Young Scientists in Basic Research (No. YSBR-029); the Youth Innovation Promotion Association of the Chinese Academy of Sciences; and the Xplore Prize.
Abstract: Enhancing the generalization capacity of reinforcement learning (RL) agents remains a formidable challenge. Existing RL methods, despite achieving superhuman performance on certain benchmarks, often struggle with this aspect. A potential reason is that the benchmarks used for training and evaluation may not offer an adequately diverse set of transferable tasks. Although recent studies have developed benchmarking environments to address this shortcoming, they typically fall short in providing tasks that both ensure a solid foundation for generalization and exhibit significant variability. To overcome these limitations, this work introduces the concept that ‘objects are composed of more fundamental components’ into environment design, as implemented in the proposed environment called summon the magic (StM). This environment generates tasks in which objects are derived from extensible and shareable basic components, facilitating strategy reuse and enhancing generalization. Furthermore, two new metrics, the adaptation sensitivity range (ASR) and the parameter correlation coefficient (PCC), are proposed to better capture and evaluate the generalization process of RL agents. Experimental results show that increasing the number of basic components per object reduces the proximal policy optimization (PPO) agent's training-testing gap by 60.9% (in episode reward), significantly alleviating overfitting. Additionally, linear variations in other environmental factors, such as the training monster set proportion and the total number of basic components, uniformly decrease the gap by at least 32.1%. These results highlight StM's effectiveness in benchmarking and probing the generalization capabilities of RL algorithms.
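As a rough illustration of how such a correlation-style metric could be computed from logged results (a sketch under the assumption that the PCC behaves like a Pearson correlation between an environment parameter and the resulting generalization gap; all numbers below are made up, not the paper's data):

```python
import numpy as np

# Hypothetical logged results: for each setting of an environment parameter
# (e.g., number of basic components per object), the train and test episode rewards.
n_components = np.array([2, 4, 6, 8, 10])
train_reward = np.array([95.0, 93.0, 91.0, 90.0, 89.0])
test_reward  = np.array([55.0, 68.0, 75.0, 80.0, 84.0])

gap = train_reward - test_reward              # training-testing gap per setting
pcc = np.corrcoef(n_components, gap)[0, 1]    # Pearson-style correlation (assumed definition)

print("gaps:", gap)
print(f"correlation between component count and gap: {pcc:.2f}")
```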
Funding: Supported by the National Natural Science Foundation of China under Grants 62471493 and 62402257; partially supported by the Natural Science Foundation of Shandong Province under Grants ZR2023LZH017, ZR2024MF066, and 2023QF025; partially supported by the Open Research Subject of the State Key Laboratory of Intelligent Game (No. ZBKF-24-12); partially supported by the Foundation of the Key Laboratory of Education Informatization for Nationalities (Yunnan Normal University), Ministry of Education (No. EIN2024C006); and partially supported by the Key Laboratory of Ethnic Language Intelligent Analysis and Security Governance of MOE (No. 202306).
Abstract: As Internet of Things (IoT) technologies continue to evolve at an unprecedented pace, intelligent big data control and information systems have become critical enablers of organizational digital transformation, facilitating data-driven decision making, fostering innovation ecosystems, and maintaining operational stability. In this study, we propose an advanced deployment algorithm for Service Function Chaining (SFC) that leverages an enhanced Practical Byzantine Fault Tolerance (PBFT) mechanism. The main goal is to tackle the issues of security and resource efficiency in SFC implementation across diverse network settings. By integrating blockchain technology and Deep Reinforcement Learning (DRL), our algorithm not only optimizes resource utilization and quality of service but also ensures robust security during SFC deployment. Specifically, the enhanced PBFT consensus mechanism (VRPBFT) significantly reduces consensus latency and improves Byzantine node detection through the introduction of a Verifiable Random Function (VRF) and a node reputation grading model. Experimental results demonstrate that, compared to traditional PBFT, the proposed VRPBFT algorithm reduces consensus latency by approximately 30% and decreases the proportion of Byzantine nodes by 40% after 100 rounds of consensus. Furthermore, the DRL-based SFC deployment algorithm (SDRL) exhibits rapid convergence during training, with improvements in long-term average revenue, request acceptance rate, and revenue/cost ratio of 17%, 14.49%, and 20.35%, respectively, over existing algorithms. Additionally, the CPU resource utilization of the SDRL algorithm reaches up to 42%, which is 27.96% higher than that of other algorithms. These findings indicate that the proposed algorithm substantially enhances resource utilization efficiency, service quality, and security in SFC deployment.
Funding: Partially supported by the National Natural Science Foundation of China under Grants 62471493 and 62402257 (for conceptualization and investigation); partially supported by the Natural Science Foundation of Shandong Province, China, under Grants ZR2023LZH017, ZR2024MF066, and 2023QF025 (for formal analysis and validation); partially supported by the Open Foundation of the Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Qilu University of Technology (Shandong Academy of Sciences), under Grant 2023ZD010 (for methodology and model design); and partially supported by the Russian Science Foundation (RSF) Project under Grant 22-71-10095-P (for validation and results verification).
Abstract: To address the challenge of missing modal information in entity alignment, to mitigate the information loss or bias arising from modal heterogeneity during fusion, and to capture shared information across modalities, this paper proposes a Multi-modal Pre-synergistic Entity Alignment model based on Cross-modal Mutual Information Strategy Optimization (MPSEA). The model first employs independent encoders to process multi-modal features, including text, images, and numerical values. Next, a multi-modal pre-synergistic fusion mechanism integrates graph-structural and visual modal features into the textual modality as preparatory information. This pre-fusion strategy enables unified perception of heterogeneous modalities at the model's initial stage, reducing discrepancies during the fusion process. Finally, using cross-modal deep perception reinforcement learning, the model achieves adaptive multilevel feature fusion between modalities, supporting the learning of more effective alignment strategies. Extensive experiments on multiple public datasets show that MPSEA achieves gains of up to 7% in Hits@1 and 8.2% in MRR on the FBDB15K dataset, and up to 9.1% in Hits@1 and 7.7% in MRR on the FBYG15K dataset, compared with existing state-of-the-art methods. These results confirm the effectiveness of the proposed model.
Abstract: Satellite communication systems provide a cost-effective solution for global Internet of Things (IoT) applications owing to their large coverage and easy deployment. This paper focuses on a satellite network system in which a low earth orbit (LEO) satellite network collects sensing data from user terminals (UTs) and then forwards the data to a ground station through a geostationary earth orbit (GEO) satellite network. Considering the limited uplink transmission resources, this paper optimizes the uplink transmission scheduling scheme over the LEO satellites. A novel transmission scheduling algorithm that combines simulated annealing and Monte Carlo methods (SA-MC) is proposed to obtain the dynamic optimal scheduling scheme. Simulation results show the effectiveness of the proposed SA-MC algorithm in terms of cost reduction and fast convergence.
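A generic simulated-annealing sketch for a scheduling cost function (the toy cost model and neighbor move below are placeholders, not the paper's SA-MC formulation): a candidate schedule is perturbed by random Monte Carlo moves, and worse schedules are accepted with a temperature-dependent probability.

```python
import math
import random

def anneal(cost, schedule, neighbor, t0=1.0, cooling=0.995, steps=2000, seed=0):
    """Generic simulated annealing: minimize cost(schedule)."""
    rng = random.Random(seed)
    best = current = schedule
    t = t0
    for _ in range(steps):
        candidate = neighbor(current, rng)                 # random Monte Carlo move
        delta = cost(candidate) - cost(current)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current = candidate                            # accept better, or worse with prob.
        if cost(current) < cost(best):
            best = current
        t *= cooling                                       # cool the temperature
    return best

# Toy problem: assign 8 uplink transmissions to 3 time slots, balancing slot load.
def cost(assignment):
    loads = [assignment.count(s) for s in range(3)]
    return max(loads) - min(loads)

def neighbor(assignment, rng):
    i = rng.randrange(len(assignment))
    return assignment[:i] + (rng.randrange(3),) + assignment[i + 1:]

start = tuple(0 for _ in range(8))  # everything crammed into slot 0 (worst case)
print("initial cost:", cost(start), "-> annealed cost:", cost(anneal(cost, start, neighbor)))
```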
Funding: Key Special Project of the National Key Research and Development Program of the Ministry of Science and Technology (No. 2017YFB1002300), Topic One: Multimodal Heterogeneous Efficient Acquisition of Traditional Chinese Medicine Big Data and Resource Library Construction (No. 2017YFB1002301) and Topic Three: Multi-Scale Cognition Methods and Treatment Analysis Model of Traditional Chinese Medicine Based on Deep Learning (No. 2017YFB1002303), from the Big Data-Driven Traditional Chinese Medicine Intelligent Auxiliary Diagnostic Service System; the Graduation Design of the "Cultivation Program" for Cross-cultivation of High-level Talents in Beijing Colleges and Universities in 2010 (Scientific Research): Research on the Clinical Diagnosis and Prediction System of Gastric Precancerous Lesions Based on Artificial Intelligence; the National Natural Science Foundation of China (No. 30701071); the Sixth Batch of Academic Experience Inheritance of Traditional Chinese Medicine Experts (2017); and the "3+3" Project of Beijing Traditional Chinese Medicine Inheritance (No. 2012-SZ-C-41).
Abstract: OBJECTIVE: To explore the correlation between tongue diagnostic information and gastroscopy results in patients with chronic gastritis. METHODS: Frequent pattern growth (FP-Growth) in SPSS Modeler was used to mine association rules between tongue image parameters and the characteristics of the stomach and duodenum seen under gastroscopy. RESULTS: Ranked in order of confidence, cyanotic tongue, slippery fur, yellow fur, and spotted tongue were sequentially associated with both gastric antrum mucosal hyperemia or edema and gastric antrum mucosal erythema/macula. L (a tongue coating color value) in the range (30, 60), tooth-marked tongue, and b (a tongue coating color value) in the range (5, 20) were sequentially associated with gastric antrum mucosal erythema/macula. A (a tongue body color value) in the range (0, 20) was related to both gastric antrum mucosal hyperemia or edema and gastric antrum mucosal erythema/macula. a (a tongue coating color value) in the range (15, 35) was associated with gastric antrum mucosal erythema/macula. In total, there are 9 strong association rules. CONCLUSIONS: Cyanotic tongue, slippery fur, yellow fur, the CIE Lab values of the tongue coating, the tongue body color value a, spotted tongue, and tooth-marked tongue are all related to gastric antrum mucosal hyperemia or edema and gastric antrum mucosal erythema/macula. The condition of the gastric mucosa could be predicted by examining the above tongue image information.
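The confidence ranking can be reproduced in miniature with a direct support/confidence computation over toy records (synthetic example data, not the study's dataset; a real analysis would use an FP-Growth implementation to mine the frequent itemsets first):

```python
# Toy records: each patient's tongue findings plus the gastroscopy finding.
records = [
    {"cyanotic tongue", "slippery fur", "antrum erythema"},
    {"cyanotic tongue", "yellow fur", "antrum edema"},
    {"cyanotic tongue", "antrum erythema"},
    {"spotted tongue", "yellow fur", "antrum erythema"},
    {"tooth-marked tongue", "antrum erythema"},
]

def support(itemset):
    """Fraction of records containing every item in itemset."""
    return sum(itemset <= r for r in records) / len(records)

def confidence(antecedent, consequent):
    """P(consequent | antecedent), estimated from the records."""
    return support(antecedent | consequent) / support(antecedent)

rules = []
for sign in ["cyanotic tongue", "yellow fur", "slippery fur", "spotted tongue"]:
    for finding in ["antrum erythema", "antrum edema"]:
        if support({sign, finding}) > 0:
            rules.append((sign, finding, confidence({sign}, {finding})))

for sign, finding, conf in sorted(rules, key=lambda r: -r[2]):
    print(f"{sign} -> {finding}: confidence {conf:.2f}")
```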