The exponential growth of Internet of Things (IoT) devices has created unprecedented challenges in data processing and resource management for time-critical applications. Traditional cloud computing paradigms cannot meet the stringent latency requirements of modern IoT systems, while pure edge computing faces resource constraints that limit processing capabilities. This paper addresses these challenges by proposing a novel Deep Reinforcement Learning (DRL)-enhanced priority-based scheduling framework for hybrid edge-cloud computing environments. Our approach integrates adaptive priority assignment with a two-level concurrency control protocol that ensures both optimal performance and data consistency. The framework introduces three key innovations: (1) a DRL-based dynamic priority assignment mechanism that learns from system behavior, (2) a hybrid concurrency control protocol combining local edge validation with global cloud coordination, and (3) an integrated mathematical model that formalizes sensor-driven transactions across edge-cloud architectures. Extensive simulations across diverse workload scenarios demonstrate significant quantitative improvements over state-of-the-art heuristic and meta-heuristic approaches: a 40% latency reduction, a 25% throughput increase, 85% resource utilization (compared to 60% for heuristic methods), a 40% reduction in energy consumption (300 vs. 500 J per task), and a 50% improvement in scalability factor (1.8 vs. 1.2 for EDF). These results establish the framework as a robust solution for large-scale IoT and autonomous applications requiring real-time processing with consistency guarantees.
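The abstract does not detail the DRL mechanism itself; as an illustration only, a tabular Q-learning agent that learns to assign priority levels from deadline slack captures the general idea of priority assignment that "learns from system behavior". All states, actions, rewards, and names below are hypothetical, not the paper's design:

```python
import random

class PriorityAgent:
    """Tabular Q-learning agent mapping a discretized system state
    (here: deadline slack) to one of three priority levels."""
    def __init__(self, n_priorities=3, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = {}  # (state, action) -> estimated value
        self.n = n_priorities
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:          # explore
            return random.randrange(self.n)
        return max(range(self.n),                   # exploit: greedy action
                   key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        best_next = max(self.q.get((next_state, a), 0.0) for a in range(self.n))
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

# Toy environment: tight-deadline tasks should receive the highest
# priority (2), loose-deadline tasks the lowest (0).
def reward(state, action):
    tight = state == "tight"
    return 1.0 if (tight and action == 2) or (not tight and action == 0) else -1.0

random.seed(0)
agent = PriorityAgent()
for _ in range(2000):
    s = random.choice(["tight", "loose"])
    a = agent.act(s)
    agent.update(s, a, reward(s, a), s)

agent.epsilon = 0.0  # greedy evaluation after training
print(agent.act("tight"), agent.act("loose"))
```

After training, the agent assigns high priority to tight-slack tasks and low priority to loose-slack ones; a real scheduler would use a far richer state (queue lengths, edge/cloud load) and a deep network instead of a table.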
Underground engineering projects such as deep tunnel excavation often encounter rockburst disasters accompanied by numerous microseismic events. Rapid interpretation of microseismic signals is crucial for the timely identification of rockbursts. However, conventional processing encompasses multi-step workflows, including classification, denoising, picking, locating, and computational analysis, coupled with manual intervention, which collectively compromise the reliability of early warnings. To address these challenges, this study proposes the "microseismic stethoscope", a multi-task machine learning and deep learning model designed for the automated processing of massive microseismic signals. The model efficiently extracts three key parameters necessary for recognizing rockburst disasters: rupture location, microseismic energy, and moment magnitude. Specifically, it processes raw waveform features through three dedicated sub-networks: a classifier for source zone classification and two regressors for microseismic energy and moment magnitude estimation. The model is markedly more efficient than traditional and semi-automated processing, reducing per-event processing time from 0.71 s and 0.49 s, respectively, to merely 0.036 s. It concurrently achieves 98% accuracy in source zone classification, with microseismic energy and moment magnitude estimation errors of 0.13 and 0.05, respectively. The model has been applied and validated in the Daxiagu Tunnel case in Sichuan, China. The application results indicate that it is as accurate as traditional methods in determining source parameters and can thus be used to identify potential geomechanical processes of rockburst disasters. By enhancing the signal processing reliability of microseismic events, the proposed model presents a significant advancement in the identification of rockburst disasters.
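The three-head architecture described (a shared feature trunk feeding one classifier and two regressors) can be sketched as follows. Dimensions, the zone count, and the random weights are illustrative stand-ins, not the paper's network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared trunk plus three task-specific heads, mirroring the abstract's
# classifier (source zone) and two regressors (energy, moment magnitude).
W_shared = rng.normal(size=(128, 32))   # waveform features -> shared embedding
W_cls    = rng.normal(size=(32, 4))     # 4 hypothetical source zones
w_energy = rng.normal(size=(32,))       # energy regression head
w_moment = rng.normal(size=(32,))       # moment-magnitude regression head

def forward(waveform_features):
    h = np.tanh(waveform_features @ W_shared)   # shared representation
    zone_logits = h @ W_cls                     # classification head
    energy      = h @ w_energy                  # regression head 1
    magnitude   = h @ w_moment                  # regression head 2
    return zone_logits, energy, magnitude

batch = rng.normal(size=(8, 128))               # 8 events, 128 waveform features
logits, energy, magnitude = forward(batch)
print(logits.shape, energy.shape, magnitude.shape)
```

Sharing the trunk lets all three tasks learn from the same waveform representation, which is the usual motivation for a multi-task design over three separate single-task models.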
Knowledge distillation has become a standard technique for compressing large language models into efficient student models, but existing methods often struggle to balance prediction accuracy with explanation quality. Recent approaches such as Distilling Step-by-Step (DSbS) introduce explanation supervision, yet they apply it uniformly, which may not fully exploit the different learning dynamics of prediction and explanation. In this work, we propose a task-structured curriculum learning (TSCL) framework that structures training into three sequential phases: (i) prediction-only, to establish stable feature representations; (ii) joint prediction-explanation, to align task outputs with rationale generation; and (iii) explanation-only, to refine the quality of rationales. This design is a simple but effective modification to DSbS, requiring no architectural changes and adding negligible training cost. We justify the phase scheduling with ablation studies and convergence analysis, showing that an initial prediction-heavy stage followed by a balanced joint phase improves both stability and explanation alignment. Extensive experiments on five datasets (e-SNLI, ANLI, CommonsenseQA, SVAMP, and MedNLI) demonstrate that TSCL consistently outperforms strong baselines, achieving gains of +1.7 to +2.6 points in accuracy and +0.8 to +1.2 in ROUGE-L, corresponding to relative error reductions of up to 21%. Beyond lexical metrics, human evaluation and ERASER-style faithfulness diagnostics confirm that TSCL produces more faithful and informative explanations. Comparative training curves further reveal faster convergence and lower variance across seeds. Efficiency analysis shows less than 3% overhead in wall-clock training time and no additional inference cost, making the approach practical for real-world deployment. This study demonstrates that a simple task-structured curriculum can significantly improve the effectiveness of knowledge distillation. By separating and sequencing objectives, TSCL achieves a better balance between accuracy, stability, and explanation quality. The framework generalizes across domains, including medical NLI, and offers a principled recipe for future applications in multimodal reasoning and reinforcement learning.
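The three-phase schedule lends itself to a simple loss-weighting function. The phase boundaries and weights below are assumptions for illustration, since the abstract does not give the exact schedule:

```python
def tscl_weights(step, total_steps, p1=0.4, p2=0.8):
    """Return (prediction_weight, explanation_weight) for the three
    TSCL phases. Boundaries p1/p2 and the weights are illustrative."""
    frac = step / total_steps
    if frac < p1:        # phase (i): prediction-only
        return 1.0, 0.0
    elif frac < p2:      # phase (ii): joint prediction-explanation
        return 0.5, 0.5
    else:                # phase (iii): explanation-only
        return 0.0, 1.0

# A training loop would then combine the two losses per step:
#   w_pred, w_expl = tscl_weights(step, total_steps)
#   loss = w_pred * prediction_loss + w_expl * explanation_loss
print(tscl_weights(0, 100), tscl_weights(50, 100), tscl_weights(90, 100))
```

Because only the loss weights change over time, this reproduces the abstract's claim of requiring no architectural changes and adding negligible training cost.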
Surface properties of crystals are critical in many fields, including electrochemistry and photoelectronics, and their efficient prediction can expedite the design and optimization of catalysts, batteries, alloys, and more. However, we are still far from realizing this vision because surface property-related databases are rare, especially for multicomponent compounds, owing to the large sample spaces and limited computing resources involved. In this work, we present a surface-emphasized multi-task crystal graph convolutional neural network (SEM-CGCNN) to predict multiple surface properties simultaneously from crystal structures. The model is evaluated on a dataset of 3526 surface energies and work functions of binary magnesium intermetallics obtained through first-principles calculations, and clear improvements in both efficiency and accuracy are observed over the original CGCNN model. By transferring the pre-trained model to datasets of pure metals and other intermetallics, the fine-tuned SEM-CGCNN outperforms learning from scratch and can be further applied to other surface properties and materials systems. This study could serve as a paradigm for the end-to-end mapping of atomic structures to anisotropic surface properties of crystals, providing an efficient framework for understanding and screening materials with desired surface characteristics.
Reconfigurable intelligent surfaces (RISs) have been cast as a promising means to alleviate blockage vulnerability and enhance coverage for terahertz (THz) communications. Owing to the large-scale array elements at the transceivers and the RIS, codebook-based beamforming can be applied in a computationally efficient manner. However, codeword selection for analog beamforming is an intractable combinatorial optimization (CO) problem. To this end, by casting the CO problem as a classification problem, a multi-task learning based analog beam selection (MTL-ABS) framework is developed to perform cooperative beam selection concurrently at the transceivers and the RIS. In addition, residual networks and a self-attention mechanism are used to combat network degradation and mine intrinsic THz channel features. Finally, network convergence is analyzed from a blockwise perspective, and numerical results demonstrate that the MTL-ABS framework greatly decreases the beam selection overhead and achieves a near-optimal sum rate compared with heuristic search based counterparts.
Background: The long-term outcomes of robotic-assisted surgery and the prognostic significance of the pretreatment neutrophil-to-lymphocyte ratio (NLR) in locally advanced rectal cancer (LARC) remain uncertain. This study aimed to assess the long-term outcomes of patients with LARC undergoing robotic-assisted surgery and to determine the prognostic value of the pretreatment NLR. Methods: We retrospectively reviewed 252 patients with LARC who were treated at a single medical center in Taiwan between January 2012 and January 2023. All patients underwent neoadjuvant concurrent chemoradiotherapy (CRT) followed by robotic-assisted surgery with total mesorectal excision (TME). Patients were stratified into four groups on the basis of pretreatment NLRs and carcinoembryonic antigen (CEA) levels. Univariate and multivariate analyses were conducted to identify prognostic indicators for overall survival (OS) and disease-free survival (DFS). Results: Patients with a pretreatment NLR of ≥3.2 exhibited significantly worse OS and DFS compared with those with an NLR of <3.2 (OS: 94.4 vs. 116.5 months, p = 0.001; DFS: 78.8 vs. 101.7 months, p = 0.003). Group A exhibited the poorest prognosis, whereas Group D had the most favorable outcomes. Multivariate analysis revealed an NLR of ≥3.2 as an independent predictor of poor OS (hazard ratio [HR] = 2.306, 95% CI: 1.149-3.747; p = 0.001) and DFS (HR = 2.055, 95% CI: 1.341-3.148; p = 0.001). Conclusion: Neoadjuvant concurrent CRT followed by robotic-assisted TME is an effective treatment strategy for LARC. A higher pretreatment NLR (≥3.2) independently predicted worse OS and DFS. Stratification using the NLR in combination with CEA levels may enhance prognostic accuracy for patients undergoing robotic-assisted surgery for LARC.
The Low Earth Orbit (LEO) remote sensing satellite mega-constellation is characterized by large numbers of satellites of various types, which gives it a unique advantage in carrying out multiple concurrent tasks. However, the large number of tasks and satellites increases the complexity of resource allocation. Therefore, the primary problem in implementing concurrent multiple tasks via a LEO mega-constellation is to pre-process tasks and observation resources. To address this challenge, we propose a pre-processing algorithm for the mega-constellation based on highly Dynamic Spatio-Temporal Grids (DSTG). First, this paper describes the management model of the mega-constellation and the multiple tasks. Then, the DSTG coding method is proposed, on the basis of which the description of complex mega-constellation observation resources is realized. Third, the DSTG algorithm is used to process concurrent multiple tasks at multiple levels, such as task space attributes, time attributes, and grid task importance evaluation. Finally, simulation results for a constellation case are given to verify the effectiveness of concurrent multi-task pre-processing based on DSTG, confirming both the autonomous decomposition and fusion of tasks mapped onto grids and the convenient indexing of time windows.
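The core of any spatio-temporal grid coding is mapping an observation request to a discrete cell. A minimal sketch follows; the cell size, time-slot length, and function name are hypothetical choices for illustration, not the paper's DSTG coding:

```python
from datetime import datetime, timezone

def grid_code(lat, lon, t, cell_deg=1.0, slot_s=600):
    """Map a (latitude, longitude, time) observation request to a
    discrete spatio-temporal grid cell (row, col, time slot).
    A 1-degree cell and 600 s slot are illustrative parameters."""
    row  = int((lat + 90.0) // cell_deg)    # latitude band
    col  = int((lon + 180.0) // cell_deg)   # longitude band
    slot = int(t.timestamp() // slot_s)     # discretized time window
    return (row, col, slot)

t = datetime(2024, 1, 1, 0, 5, tzinfo=timezone.utc)
print(grid_code(30.5, 120.25, t))
```

Once tasks and satellite access windows share this discrete key, task fusion becomes grouping tasks with equal (row, col) cells, and time-window indexing becomes a simple range query over slot numbers, which is the kind of pre-processing convenience the abstract describes.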
An integrated method for concurrency control in parallel real-time databases is proposed in this paper. The nested transaction model is investigated to offer more atomic execution units and finer-grained control within a transaction. Based on the classical nested locking protocol and the speculative concurrency control approach, a two-shadow adaptive concurrency control protocol is proposed, which combines the Sacrifice-based Optimistic Concurrency Control (OPT-Sacrifice) and High Priority two-phase locking (HP2PL) algorithms to support both an optimistic and a pessimistic shadow of each sub-transaction, thereby increasing the likelihood of timely commitment and avoiding unnecessary replication overhead.
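The HP2PL component referenced above resolves lock conflicts by transaction priority: a higher-priority requester preempts the current holder, while a lower-priority requester waits. A minimal sketch of that conflict rule (class and method names are hypothetical):

```python
class HP2PL:
    """Minimal high-priority two-phase locking (HP2PL) conflict rule:
    a higher-priority requester preempts the lock holder, which is
    restarted; otherwise the requester waits."""
    def __init__(self):
        self.locks = {}       # data item -> (holder txn, holder priority)
        self.aborted = set()  # transactions restarted by preemption

    def acquire(self, item, txn, priority):
        held = self.locks.get(item)
        if held is None or held[0] == txn:     # free, or re-entrant request
            self.locks[item] = (txn, priority)
            return "granted"
        holder, holder_prio = held
        if priority > holder_prio:             # preempt lower-priority holder
            self.aborted.add(holder)
            self.locks[item] = (txn, priority)
            return "granted"
        return "wait"                          # block behind higher priority

db = HP2PL()
print(db.acquire("x", "T1", priority=1))   # granted
print(db.acquire("x", "T2", priority=5))   # granted: T1 is preempted
print(db.acquire("x", "T3", priority=2))   # wait: T2 holds with priority 5
print(db.aborted)
```

The two-shadow protocol in the abstract pairs this pessimistic behavior with an optimistic (OPT-Sacrifice) shadow of each sub-transaction, so a preempted shadow does not necessarily doom the whole transaction.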
Secure real-time databases must simultaneously satisfy two requirements: guaranteeing data security and minimizing the ratio of transactions that miss their deadlines. However, these two requirements can conflict, and achieving one may mean sacrificing the other. This paper presents a secure real-time concurrency control protocol based on the optimistic method. The protocol incorporates security constraints into a real-time optimistic concurrency control protocol and makes a suitable tradeoff between security and real-time requirements by introducing a security influence factor and a real-time influence factor. The experimental results show that the protocol achieves data security without significantly degrading real-time performance.
High rates of overlapping sexual relationships (concurrency) are believed to be important in the generation of generalized HIV epidemics in sub-Saharan Africa. Different authors favor socioeconomic, gender-equity, or cultural explanations for the high concurrency rates in this region. We performed linear regression to analyze the association between the point-prevalence of concurrency in males aged 15-49 years and various indicators of socioeconomic status and gender equity, using data from 11 countries surveyed in 1989/1990. We found no meaningful association between concurrency and the various markers of socioeconomic status and gender equity. This analysis supports the findings of other studies that high concurrency rates in sub-Saharan Africa could be reduced without having to address socioeconomic and gender-equity factors.
Rust is a system-level programming language that provides thread and memory safety guarantees through a suite of static compiler checking rules, preventing segmentation errors. However, because this checking is strict enough to confine Rust's programmability, developers often use the keyword "unsafe" to bypass it, allowing the caller to interact with the OS directly. Unfortunately, code blocks marked "unsafe" can easily lead to serious bugs such as memory safety violations and race conditions. In this paper, to verify the memory and concurrency safety of Rust programs, we present RSMC (Safety Model Checker for Rust), a tool based on Smack that detects concurrency bugs and memory safety errors in Rust programs by combining model checking of concurrency primitives with memory boundary model checking. RSMC, with an assertion generator, can automatically insert assertions and requires no programmer annotations to verify Rust programs. We evaluate RSMC on two categories of Rust programs, and the results show that RSMC can effectively find concurrency bugs and memory safety errors in vulnerable Rust programs that include unsafe code.
Remaining-time prediction of business processes plays an important role in resource scheduling and planning. The structural features of a single process instance and the concurrent running of multiple process instances are the main factors that affect the accuracy of remaining-time prediction, yet existing prediction methods do not take full advantage of these two aspects. To address this issue, a new prediction method based on trace representation is proposed. More specifically, we first associate the prefix set generated from the event log with different states of a transition system and encode the structural features of the prefixes in each state. Then, an annotation containing the feature representation of the prefix and the corresponding remaining time is added to each state to obtain an extended transition system. Next, states in the extended transition system are partitioned by state length, which accounts for concurrency among multiple process instances. Finally, long short-term memory (LSTM) deep recurrent neural networks are applied to each partition to predict the remaining time of new running instances. Extensive experimental evaluation using synthetic and real-life event logs shows that the proposed method outperforms existing baseline methods.
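The prefix-to-state construction can be sketched as follows. Here, as a simplification, each state is the last few activities of a prefix, the annotation's remaining time is counted in remaining events rather than timestamps, and a plain average stands in for the per-partition LSTM predictor:

```python
from collections import defaultdict

def build_transition_system(log, horizon=2):
    """Annotated transition system: each state is the last `horizon`
    activities of a prefix; its annotation collects the observed
    remaining times (remaining event counts, as a stand-in)."""
    annotations = defaultdict(list)
    for trace in log:
        for i in range(1, len(trace) + 1):
            state = tuple(trace[max(0, i - horizon):i])
            annotations[state].append(len(trace) - i)  # remaining events
    return annotations

# Tiny synthetic event log: two completed traces.
log = [["a", "b", "c", "d"], ["a", "b", "d"]]
ts = build_transition_system(log)

# Predict for a running instance currently in state ("a", "b") by
# averaging the annotation; the paper trains an LSTM per partition instead.
state = ("a", "b")
print(sum(ts[state]) / len(ts[state]))
```

Partitioning states by length, as the method does, groups prefixes of similar progress so one model per partition sees comparable remaining-time distributions.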
In parallel real-time database systems, concurrency control protocols must satisfy time constraints as well as integrity constraints. The authors present a validation concurrency control (VCC) protocol, which enhances the performance of real-time concurrency control by reducing the number of transactions that might miss their deadlines, and compare its performance with that of the HP2PL (High priority two-phase locking) and OCC-TI-WAIT-50 (Optimistic concurrency control-time interval-wait-50) protocols under a shared-disk architecture by simulation. The simulation results reveal that the presented protocol effectively reduces the number of restarted transactions that might miss their deadlines and performs better than HP2PL and OCC-TI-WAIT-50. It works well when the transaction arrival rate is below a threshold; however, due to resource contention, the percentage of missed deadlines increases sharply when the arrival rate exceeds that threshold.
The lessons of history indicate that mismanagement of natural resources and the environment often leads to potentially adverse consequences. The increasing interest in economic development, particularly in the developing countries of the world, coupled with increasing population pressures and the globalization of economic activity, is placing noticeable stresses on the ultimate sustainability of both human and environmental systems. Sustainable development is not a new concept; it has been an area of concern for different elements of society for some time. Yet efforts to understand the implications of sustainable development have not, until recently, been formalized. We have focused singularly on economic development and environmental quality as if they were mutually exclusive. This paper focuses on the concept of concurrency as both a conceptual framework and a practicable method for understanding and implementing the ecology and economy of sustainability.
Concurrency control is a critical technology and one of the open problems in CSCW systems. With the development of agent-based technology, agents have also been applied to the research and development of CSCW systems. An agent-based method for concurrency control in CSCW is explored in this paper. The approach applies the ideas of AOP (Agent-Oriented Programming) to improve the traditional locking method, based on an analysis of the characteristics and functional requirements of concurrency control in CSCW and of various commonly used concurrency control methods. All amendments to the locking method are made on the basis of an analysis of its limitations. A new algorithm supporting a queue of locking requests for agent-based concurrency control is also presented. All of these aspects are discussed in some detail in this paper.
Most proposed concurrency control protocols for real-time database systems are based on the serializability theorem. Owing to the unique characteristics of real-time database applications and the importance of satisfying the timing constraints of transactions, serializability is too strong as a correctness criterion and is not suitable for real-time databases in most cases. On the other hand, relaxed notions of serializability, including epsilon-serializability and similarity serializability, allow more real-time transactions to satisfy their timing constraints, but database consistency may be sacrificed to some extent. We therefore propose the use of weak serializability (WSR), which is more relaxed than conflict serializability while database consistency is still maintained. In this paper, we first formally define the new correctness notion of weak serializability. After presenting the necessary and sufficient conditions for weak serializability, we outline the corresponding concurrency control protocol, WDHP (weak serializable distributed high priority protocol), for distributed real-time databases, in which a new lock mode called the mask lock mode is proposed to simplify the condition for global consistency. Finally, a series of simulation studies shows that the new concurrency control protocol greatly improves the performance of distributed real-time databases.
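For contrast with the relaxed criteria discussed above, the standard conflict-serializability test that WSR weakens can be sketched as a precedence-graph cycle check over read/write conflicts (the schedule encoding here is an illustrative convention, not the paper's):

```python
def conflict_serializable(schedule):
    """Precedence-graph test for conflict serializability.
    `schedule` is a chronological list of (txn, op, item), op in {'r','w'}.
    An edge T1 -> T2 means T1 performed a conflicting op before T2;
    the schedule is conflict-serializable iff the graph is acyclic."""
    edges = set()
    txns = {t for t, _, _ in schedule}
    for i, (t1, o1, x1) in enumerate(schedule):
        for t2, o2, x2 in schedule[i + 1:]:
            # Conflict: different txns, same item, at least one write.
            if t1 != t2 and x1 == x2 and "w" in (o1, o2):
                edges.add((t1, t2))

    def has_cycle(node, stack, seen):
        stack.add(node)
        for a, b in edges:
            if a == node:
                if b in stack or (b not in seen and has_cycle(b, stack, seen)):
                    return True
        stack.discard(node)
        seen.add(node)
        return False

    return not any(has_cycle(t, set(), set()) for t in txns)

ok  = [("T1", "r", "x"), ("T1", "w", "x"), ("T2", "r", "x")]
bad = [("T1", "r", "x"), ("T2", "w", "x"), ("T2", "r", "y"), ("T1", "w", "y")]
print(conflict_serializable(ok), conflict_serializable(bad))
```

A criterion like WSR accepts some schedules this test rejects, which is exactly how it lets more transactions meet their deadlines while still bounding inconsistency.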
Objective: The prevalence of syphilis differs considerably between populations, and individual-level risk factors such as the number of sex partners seem unable to completely explain these differences. The effect of network-level factors, such as the prevalence of partner concurrency, on syphilis prevalence has not hitherto been investigated. Study design: Linear regression was performed to assess the relationship between the prevalence of male concurrency and the prevalence of syphilis in each of 11 countries for which we could obtain comparable data. The concurrency prevalence data were taken from the WHO/Global Programme on AIDS (GPA) sexual behavioural surveys. Syphilis prevalence rates were obtained from antenatal syphilis serology surveys conducted in the same countries. In addition, we used linear regression to assess whether there was a relationship between syphilis and concurrency prevalence across racial and ethnic groups within the United States and South Africa. Results: In the international study, we found a strong relationship between the prevalence of male concurrency and syphilis prevalence (r = 0.79, P = 0.003). In the subnational studies, the relationship between concurrency and syphilis prevalence was positive in all cases but statistically significant only in the case of South Africa's racial groups (r = 0.98, P = 0.01). Conclusions: The finding of an ecological-level association between syphilis and partner concurrency needs to be replicated, but it suggests that efforts directed towards decreasing partner concurrency may reduce syphilis prevalence.
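An ecological regression of this kind reduces, in its simplest form, to a correlation over country-level pairs. A sketch with made-up data (not the study's figures, which reported r = 0.79 internationally):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical country-level pairs: (male concurrency %, syphilis prevalence %)
concurrency = [5, 10, 15, 20, 25]
syphilis    = [1.0, 2.1, 2.9, 4.2, 5.0]
print(round(pearson_r(concurrency, syphilis), 3))
```

With only 11 countries, such an analysis is sensitive to single data points, which is one reason the abstract stresses that the association needs replication.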
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that incorporates multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, and then to use MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of the method through several numerical experiments, test various parameter-sharing structures in MTL, and compare their testing results. Finally, the method is applied to the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered from sparse or noisy data.
Due to the various performance requirements and data access restrictions of different types of real-time transactions, concurrency control protocols designed for systems with a single type of transaction are not sufficient for mixed real-time database systems (MRTDBS), where different types of real-time transactions coexist concurrently. In this paper, a new concurrency control protocol, MRTT_CC, for mixed real-time transactions is proposed. The new strategy integrates different concurrency control protocols to meet the deadline requirements of the different types of real-time transactions. The data similarity concept is also exploited to reduce the blocking time of soft real-time transactions, increasing their chances of meeting their deadlines. Simulation experiments show that the new protocol achieves good performance.
The problem of maintaining data consistency in mobile broadcast environments is studied. Quasi-serializability is first formally defined and analyzed, and it is shown to be less stringent than serializability when database consistency is maintained for transactions. A corresponding concurrency control protocol that supports both update transactions and read-only transactions is then outlined for mobile broadcast environments. Finally, simulation results confirm that the proposed protocol can improve response time significantly.
Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R909), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42130719 and 42177173) and the Doctoral Direct Train Project of Chongqing Natural Science Foundation (Grant No. CSTB2023NSCQ-BSX0029).
Abstract: Knowledge distillation has become a standard technique for compressing large language models into efficient student models, but existing methods often struggle to balance prediction accuracy with explanation quality. Recent approaches such as Distilling Step-by-Step (DSbS) introduce explanation supervision, yet they apply it in a uniform manner that may not fully exploit the different learning dynamics of prediction and explanation. In this work, we propose a task-structured curriculum learning (TSCL) framework that structures training into three sequential phases: (i) prediction-only, to establish stable feature representations; (ii) joint prediction-explanation, to align task outputs with rationale generation; and (iii) explanation-only, to refine the quality of rationales. This design provides a simple but effective modification to DSbS, requiring no architectural changes and adding negligible training cost. We justify the phase scheduling with ablation studies and convergence analysis, showing that an initial prediction-heavy stage followed by a balanced joint phase improves both stability and explanation alignment. Extensive experiments on five datasets (e-SNLI, ANLI, CommonsenseQA, SVAMP, and MedNLI) demonstrate that TSCL consistently outperforms strong baselines, achieving gains of +1.7-2.6 points in accuracy and 0.8-1.2 in ROUGE-L, corresponding to relative error reductions of up to 21%. Beyond lexical metrics, human evaluation and ERASER-style faithfulness diagnostics confirm that TSCL produces more faithful and informative explanations. Comparative training curves further reveal faster convergence and lower variance across seeds. Efficiency analysis shows less than 3% overhead in wall-clock training time and no additional inference cost, making the approach practical for real-world deployment. This study demonstrates that a simple task-structured curriculum can significantly improve the effectiveness of knowledge distillation. By separating and sequencing objectives, TSCL achieves a better balance between accuracy, stability, and explanation quality. The framework generalizes across domains, including medical NLI, and offers a principled recipe for future applications in multimodal reasoning and reinforcement learning.
Funding: supported by the National Key R&D Program (No. 2021YFB3501002) of the Ministry of Science and Technology of China, and by the National Natural Science Foundation of China (Nos. 51825101 and 52127801).
Abstract: Surface properties of crystals are critical in many fields, including electrochemistry and photoelectronics, and their efficient prediction can expedite the design and optimization of catalysts, batteries, alloys, etc. However, we are still far from realizing this vision owing to the rarity of surface-property databases, especially for multicomponent compounds with their large sample spaces and the limited computing resources available. In this work, we present a surface-emphasized multi-task crystal graph convolutional neural network (SEM-CGCNN) to predict multiple surface properties simultaneously from crystal structures. The model is evaluated on a dataset of 3526 surface energies and work functions of binary magnesium intermetallics obtained through first-principles calculations, and clear improvements in both efficiency and accuracy are observed over the original CGCNN model. By transferring the pre-trained model to datasets of pure metals and other intermetallics, the fine-tuned SEM-CGCNN outperforms learning from scratch and can be further applied to other surface properties and materials systems. This study could serve as a paradigm for the end-to-end mapping of atomic structures to anisotropic surface properties of crystals, providing an efficient framework for understanding and screening materials with desired surface characteristics.
Abstract: Reconfigurable intelligent surfaces (RISs) have been cast as a promising alternative for alleviating blockage vulnerability and enhancing coverage capability in terahertz (THz) communications. Owing to the large-scale array elements at the transceivers and the RIS, codebook-based beamforming can be utilized in a computationally efficient manner. However, codeword selection for analog beamforming is an intractable combinatorial optimization (CO) problem. To this end, by casting the CO problem as a classification problem, a multi-task learning based analog beam selection (MTL-ABS) framework is developed to perform cooperative beam selection concurrently at the transceivers and the RIS. In addition, residual networks and a self-attention mechanism are used to combat network degradation and mine intrinsic THz channel features. Finally, network convergence is analyzed from a blockwise perspective, and numerical results demonstrate that the MTL-ABS framework greatly decreases the beam selection overhead and achieves a near-optimal sum-rate compared with heuristic-search-based counterparts.
Funding: supported by grants from the National Science and Technology Council (NSTC112-2314-B-037-050-MY3, NSTC114-2314-B-037-103-MY3, NSTC114-2321-B-037-003); the Ministry of Health and Welfare (MOHW113-TDU-B-222-134014), funded by the health and welfare surcharge on tobacco products; Kaohsiung Medical University Hospital (KMUH113-3R31, KMUH113-3R32, KMUH113-3R33, KMUH113-3M58, KMUH113-3M59, KMUH-S11412, KMUH-SH11403); a Kaohsiung Medical University Research Center Grant (KMU-TC113A04); and the National Tsing Hua University-Kaohsiung Medical University Joint Research Project (NTHU-KMU-KT114P008); with additional support from the Taiwan Precision Medicine Initiative and Taiwan Biobank, Academia Sinica, Taiwan.
Abstract: Background: The long-term outcomes of robotic-assisted surgery and the prognostic significance of the pretreatment neutrophil-to-lymphocyte ratio (NLR) in locally advanced rectal cancer (LARC) remain uncertain. This study aimed to assess the long-term outcomes of patients with LARC undergoing robotic-assisted surgery and to determine the prognostic value of the pretreatment NLR. Methods: We retrospectively reviewed 252 patients with LARC who were treated at a single medical center in Taiwan between January 2012 and January 2023. All patients underwent neoadjuvant concurrent chemoradiotherapy (CRT) followed by robotic-assisted surgery with total mesorectal excision (TME). Patients were stratified into four groups on the basis of pretreatment NLR and carcinoembryonic antigen (CEA) levels. Univariate and multivariate analyses were conducted to identify prognostic indicators for overall survival (OS) and disease-free survival (DFS). Results: Patients with a pretreatment NLR of ≥3.2 exhibited significantly worse OS and DFS than those with an NLR of <3.2 (OS: 94.4 vs. 116.5 months, p = 0.001; DFS: 78.8 vs. 101.7 months, p = 0.003). Group A exhibited the poorest prognosis, whereas Group D had the most favorable outcomes. Multivariate analysis revealed NLR ≥3.2 as an independent predictor of poor OS (hazard ratio [HR] = 2.306, 95% CI: 1.149-3.747; p = 0.001) and DFS (HR = 2.055, 95% CI: 1.341-3.148; p = 0.001). Conclusion: Neoadjuvant concurrent CRT followed by robotic-assisted TME is an effective treatment strategy for LARC. A higher pretreatment NLR (≥3.2) independently predicted worse OS and DFS. Stratification using the NLR in combination with CEA levels may enhance prognostic accuracy for patients undergoing robotic-assisted surgery for LARC.
Funding: supported by the National Natural Science Foundation of China (Nos. 62003115 and 11972130), the Shenzhen Science and Technology Program, China (JCYJ20220818102207015), and the Heilongjiang Touyan Team Program, China.
Abstract: A Low Earth Orbit (LEO) remote sensing mega-constellation comprises a large number of satellites of various types, which gives it unique advantages for realizing concurrent multiple tasks. However, the large number of tasks and satellites increases the complexity of resource allocation. The primary problem in implementing concurrent multiple tasks via a LEO mega-constellation is therefore to pre-process tasks and observation resources. To address this challenge, we propose a pre-processing algorithm for the mega-constellation based on highly Dynamic Spatio-Temporal Grids (DSTG). In the first stage, we describe the management model of the mega-constellation and the multiple tasks. Then, a coding method for the DSTG is proposed, with which complex mega-constellation observation resources can be described. In the third stage, the DSTG algorithm is used to process concurrent multiple tasks at multiple levels, such as task spatial attributes, temporal attributes, and grid-based task importance evaluation. Finally, simulation results for a constellation case verify the effectiveness of concurrent multi-task pre-processing based on DSTG: the autonomous decomposition and fusion of tasks mapped to grids, and the convenient indexing of time windows, are both validated.
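The abstract above does not give the DSTG coding scheme itself; as a rough illustration of what mapping tasks to spatio-temporal grids can look like, the following sketch encodes a task's location and time into a single cell index on a uniform latitude-longitude-time grid. The cell sizes and all names are assumptions for illustration, not values from the paper.

```python
# Minimal sketch of a spatio-temporal grid code: a task's location and
# time are mapped to one integer cell index, so tasks falling in the
# same cell can be fused before resource allocation.
# CELL_DEG and SLOT_S are illustrative assumptions.

CELL_DEG = 1.0      # spatial cell size in degrees
SLOT_S = 600        # temporal slot length in seconds
LON_CELLS = int(360 / CELL_DEG)
LAT_CELLS = int(180 / CELL_DEG)

def dstg_code(lat: float, lon: float, t: float) -> int:
    """Encode (latitude, longitude, time) into one grid-cell index."""
    i = int((lat + 90) / CELL_DEG)          # latitude row
    j = int((lon + 180) / CELL_DEG)         # longitude column
    k = int(t / SLOT_S)                     # time slot
    return (k * LAT_CELLS + i) * LON_CELLS + j

def dstg_decode(code: int):
    """Invert dstg_code back to (lat_cell, lon_cell, time_slot)."""
    j = code % LON_CELLS
    i = (code // LON_CELLS) % LAT_CELLS
    k = code // (LON_CELLS * LAT_CELLS)
    return i, j, k

# Two nearby tasks in the same time slot share a code and can be fused:
a = dstg_code(30.2, 114.3, 1200)
b = dstg_code(30.7, 114.9, 1500)
assert a == b
```

A real DSTG would additionally make the grid resolution dynamic (finer cells where tasks cluster), but the fixed-resolution version above already shows how grid codes turn spatial and temporal task attributes into an index that is cheap to compare and query.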
Abstract: An integrated method for concurrency control in parallel real-time databases is proposed in this paper. The nested transaction model is investigated to offer more atomic execution units and finer-grained control within a transaction. Based on the classical nested locking protocol and the speculative concurrency control approach, a two-shadow adaptive concurrency control protocol is proposed, which combines the Sacrifice-based Optimistic Concurrency Control (OPT-Sacrifice) and High Priority two-phase locking (HP2PL) algorithms to support both an optimistic and a pessimistic shadow of each sub-transaction, thereby increasing the likelihood of successful timely commitment and avoiding unnecessary replication overhead.
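As a toy illustration of the two-shadow idea, the sketch below runs a backward-validation check for the optimistic shadow of each sub-transaction and falls back to a pessimistic (lock-based) shadow on conflict, so a validation failure need not force a full restart. The conflict model and all names are simplified assumptions, not the paper's actual protocol.

```python
# Toy sketch: each sub-transaction has an optimistic shadow (executes
# immediately, validated at commit) and a pessimistic shadow (re-executes
# under locks). Whichever shadow can commit wins.

class SubTransaction:
    def __init__(self, tid, read_set, write_set):
        self.tid = tid
        self.read_set = set(read_set)
        self.write_set = set(write_set)

def optimistic_commit_ok(txn, committed_writes):
    """Backward validation: fail if a committed writer touched our reads."""
    return not (txn.read_set & committed_writes)

def run_two_shadow(txns):
    committed_writes = set()
    outcomes = {}
    for txn in txns:
        if optimistic_commit_ok(txn, committed_writes):
            outcomes[txn.tid] = "optimistic"   # fast path commits
        else:
            # fall back to the pessimistic shadow, which would re-execute
            # under locks instead of restarting from scratch
            outcomes[txn.tid] = "pessimistic"
        committed_writes |= txn.write_set
    return outcomes

t1 = SubTransaction("T1", read_set={"x"}, write_set={"y"})
t2 = SubTransaction("T2", read_set={"y"}, write_set={"z"})  # reads T1's write
print(run_two_shadow([t1, t2]))  # {'T1': 'optimistic', 'T2': 'pessimistic'}
```

The speculative element in the real protocol is that both shadows run concurrently rather than sequentially; this sketch only shows the commit-time decision between them.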
Funding: supported by the Defense Pre-Research Project of the "Tenth Five-Year Plan" of China (413150403).
Abstract: Secure real-time databases must simultaneously satisfy two requirements: guaranteeing data security and minimizing the ratio of transactions that miss their deadlines. However, these two requirements can conflict with each other, and achieving one may come at the cost of the other. This paper presents a secure real-time concurrency control protocol based on the optimistic method. The protocol incorporates security constraints into a real-time optimistic concurrency control protocol and makes a suitable tradeoff between security and real-time requirements by introducing a secure influence factor and a real-time influence factor. The experimental results show that the protocol achieves data security without significantly degrading real-time performance.
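The abstract names a secure influence factor and a real-time influence factor but does not give the rule for combining them; the sketch below assumes a simple weighted sum, purely to illustrate how such a tradeoff score could steer conflict resolution. All names, inputs, and the formula itself are assumptions.

```python
# Hypothetical tradeoff score for resolving a validation conflict:
# the transaction with the higher score is favored. The weighted-sum
# form and both weights are illustrative assumptions, not the paper's.

def conflict_priority(security_gap, slack, w_sec=0.5, w_rt=0.5):
    """security_gap in [0,1]: severity of the covert-channel risk if the
    lower-security transaction were favored (secure influence factor).
    slack in [0,1]: normalized remaining time before the deadline;
    less slack means more urgent (real-time influence factor)."""
    return w_sec * security_gap + w_rt * (1.0 - slack)

# An urgent, low-risk transaction outranks a relaxed, high-risk one
# under these example weights:
urgent = conflict_priority(security_gap=0.3, slack=0.1)   # 0.60
relaxed = conflict_priority(security_gap=0.8, slack=0.9)  # 0.45
assert urgent > relaxed
```

Shifting `w_sec` toward 1 recovers a security-first policy, while shifting `w_rt` toward 1 recovers a deadline-first policy, which is the kind of tunable tradeoff the abstract describes.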
Abstract: High rates of overlapping sexual relationships (concurrency) are believed to be important in the generation of generalized HIV epidemics in sub-Saharan Africa. Different authors favor socioeconomic, gender-equity, or cultural explanations for the high concurrency rates in this region. We performed linear regression to analyze the association between the point-prevalence of concurrency in males aged 15-49 years and various indicators of socioeconomic status and gender equity, using data from 11 countries surveyed in 1989/1990. We found no meaningful association between concurrency and the various markers of socioeconomic status and gender equity. This analysis supports the findings of other studies that high concurrency rates in sub-Saharan Africa could be reduced without having to address socioeconomic and gender-equity factors.
Funding: supported by the National Basic Research Program of China (973 Program) (2014CB340601).
Abstract: Rust is a system-level programming language that provides thread- and memory-safety guarantees through a suite of static compiler checking rules, preventing segmentation errors. However, because this checking is strict enough to constrain Rust's programmability, developers often use the keyword "unsafe" to bypass it, allowing the caller to interact with the OS directly. Unfortunately, code blocks marked "unsafe" can easily lead to serious bugs such as memory-safety violations and race conditions. In this paper, to verify the memory and concurrency safety of Rust programs, we present RSMC (Safety Model Checker for Rust), a tool based on Smack that detects concurrency bugs and memory-safety errors in Rust programs by combining model checking of concurrency primitives with memory-boundary model checking. RSMC, with an assertion generator, can automatically insert assertions and requires no programmer annotations to verify Rust programs. We evaluate RSMC on two categories of Rust programs, and the results show that RSMC can effectively find concurrency bugs and memory-safety errors in vulnerable Rust programs that include unsafe code.
Funding: supported by the National Natural Science Foundation of China (Nos. U1931207 and 61702306), the Science and Technology Development Fund of Shandong Province of China (Nos. ZR2019LZH001, ZR2017BF015, and ZR2017MF027), the Humanities and Social Science Research Project of the Ministry of Education (No. 18YJAZH017), the Shandong-Chongqing Science and Technology Cooperation Project (No. cstc2020jscx-lyjsAX0008), the Science and Technology Development Fund of Qingdao (No. 21-1-5-zlyj-1-zc), the Taishan Scholar Program of Shandong Province, and the SDUST Research Fund (Nos. 2015TDJH102 and 2019KJN024).
Abstract: Remaining-time prediction of business processes plays an important role in resource scheduling and planning. The structural features of a single process instance and the concurrent running of multiple process instances are the main factors that affect the accuracy of remaining-time prediction, yet existing prediction methods do not take full advantage of either aspect. To address this issue, a new prediction method based on trace representation is proposed. More specifically, we first associate the prefix set generated from the event log with different states of a transition system and encode the structural features of the prefixes in each state. Then, an annotation containing the feature representation of the prefix and the corresponding remaining time is added to each state to obtain an extended transition system. Next, states in the extended transition system are partitioned by state length, which accounts for concurrency among multiple process instances. Finally, long short-term memory (LSTM) deep recurrent neural networks are applied to each partition to predict the remaining time of new running instances. Extensive experimental evaluation on synthetic and real-life event logs shows that the proposed method outperforms existing baseline methods.
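A minimal sketch of the annotation and partitioning steps described above, with the per-partition LSTM replaced by a simple mean predictor so the example stays self-contained. State representation, log format, and all names are illustrative assumptions.

```python
# Sketch: prefixes of each trace become states of a transition system,
# each annotated with observed remaining times; states are then
# partitioned by prefix length. A per-partition mean stands in for the
# per-partition LSTM regressor of the actual method.

from collections import defaultdict

def build_annotated_states(log):
    """log: list of traces; each trace is a list of (activity, timestamp)."""
    ann = defaultdict(list)                       # state -> remaining times
    for trace in log:
        end = trace[-1][1]                        # completion time
        for k in range(1, len(trace)):
            prefix = tuple(a for a, _ in trace[:k])    # state = prefix
            ann[prefix].append(end - trace[k - 1][1])  # remaining time
    return ann

def partition_by_length(ann):
    """Group states by prefix length; predictor per state = mean."""
    parts = defaultdict(dict)
    for state, times in ann.items():
        parts[len(state)][state] = sum(times) / len(times)
    return parts

log = [[("a", 0), ("b", 4), ("c", 10)],
       [("a", 0), ("b", 6), ("c", 10)]]
parts = partition_by_length(build_annotated_states(log))
print(parts[1][("a",)])      # mean remaining time after <a>: 10.0
print(parts[2][("a", "b")])  # mean remaining time after <a,b>: 5.0
```

Partitioning by length means each regressor only sees instances at the same progress stage, which is how the method accounts for many instances running concurrently at different stages.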
Abstract: In parallel real-time database systems, concurrency control protocols must satisfy time constraints as well as integrity constraints. The authors present a validation concurrency control (VCC) protocol that enhances the performance of real-time concurrency control by reducing the number of transactions that might miss their deadlines, and compare its performance with that of the HP2PL (High priority two-phase locking) protocol and the OCC-TI-WAIT-50 (Optimistic concurrency control-time interval-wait-50) protocol under a shared-disk architecture by simulation. The simulation results reveal that the presented protocol effectively reduces the number of restarted transactions that might miss their deadlines and performs better than HP2PL and OCC-TI-WAIT-50. It works well when the transaction arrival rate is below a threshold; however, due to resource contention, the percentage of missed deadlines increases sharply once the arrival rate exceeds that threshold.
Abstract: The lessons of history indicate that mismanagement of natural resources and the environment often leads to potentially adverse consequences. The increasing interest in economic development, particularly in the developing countries of the world, coupled with increasing population pressures and the globalization of economic activity, is placing noticeable stress on the ultimate sustainability of both human and environmental systems. Sustainable development is not a new concept; it has been an area of concern for different elements of society for some time. Yet efforts to understand the implications of sustainable development have not, until recently, been formalized. We have focused singularly on economic development and environmental quality as if they were mutually exclusive. This paper focuses on the concept of concurrency as both a conceptual framework and a practicable method of understanding and implementing the ecology and economy of sustainability.
Abstract: Concurrency control is a critical technology and one of the key problems in CSCW systems. With the development of agent-based technology, agents have also been applied to the research and development of CSCW systems. An agent-based method for concurrency control in CSCW is explored in this paper. The approach uses ideas from Agent-Oriented Programming (AOP) to improve the traditional locking method, on the basis of an analysis of the characteristics and functional requirements of concurrency control in CSCW and of various commonly used concurrency control methods. All amendments to the locking method are made on the basis of an analysis of the limitations that locking brings. A new algorithm supporting a queue of locking requests for agent-based concurrency control is also presented. All of these aspects are discussed in some detail in this paper.
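A minimal sketch of the queued-locking amendment the abstract describes: each shared object keeps a FIFO queue of waiting agents instead of simply rejecting conflicting lock requests. Class and method names are illustrative assumptions, not the paper's algorithm.

```python
# Toy lock manager with a request queue: a conflicting request is
# enqueued rather than refused, and release() hands the lock to the
# next waiting agent in FIFO order.

from collections import deque

class QueuedLock:
    def __init__(self):
        self.holder = None
        self.queue = deque()

    def request(self, agent):
        """Grant immediately if free; otherwise enqueue the agent."""
        if self.holder is None:
            self.holder = agent
            return "granted"
        self.queue.append(agent)
        return "queued"

    def release(self):
        """Pass the lock to the next waiting agent, if any."""
        self.holder = self.queue.popleft() if self.queue else None
        return self.holder

lock = QueuedLock()
print(lock.request("editor-1"))  # granted
print(lock.request("editor-2"))  # queued
print(lock.release())            # editor-2 now holds the lock
```

In an agent-based design, each agent would watch its queue position and act autonomously (e.g., withdraw a stale request); the queue itself is the piece that removes the busy-retry behavior of plain locking.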
Abstract: Most proposed concurrency control protocols for real-time database systems are based on the serializability theorem. Owing to the unique characteristics of real-time database applications and the importance of satisfying the timing constraints of transactions, serializability is too strong as a correctness criterion and is not suitable for real-time databases in most cases. On the other hand, relaxed serializability, including epsilon-serializability and similarity serializability, can allow more real-time transactions to satisfy their timing constraints, but database consistency may be sacrificed to some extent. We thus propose the use of weak serializability (WSR), which is more relaxed than conflict serializability while database consistency is maintained. In this paper, we first formally define the new notion of correctness called weak serializability. After the necessary and sufficient conditions for weak serializability are shown, the corresponding concurrency control protocol WDHP (weak serializable distributed high priority protocol) is outlined for distributed real-time databases, in which a new lock mode called the mask lock mode is proposed to simplify the condition of global consistency. Finally, through a series of simulation studies, it is shown that the new concurrency control protocol greatly improves the performance of distributed real-time databases.
Abstract: Objective: The prevalence of syphilis differs considerably between populations, and individual-level risk factors such as the number of sex partners seem unable to completely explain these differences. The effect of network-level factors, such as the prevalence of partner concurrency, on syphilis prevalence has not hitherto been investigated. Study design: Linear regression was performed to assess the relationship between the prevalence of male concurrency and the prevalence of syphilis in each of 11 countries for which we could obtain comparable data. The data for concurrency prevalence were taken from the WHO/Global Programme on AIDS (GPA) sexual behavioural surveys. Syphilis prevalence rates were obtained from antenatal syphilis serology surveys done in the same countries. In addition, we used linear regression to assess whether there was a relationship between syphilis and concurrency prevalence among various racial and ethnic groups within the United States and South Africa. Results: In the international study, we found a strong relationship between the prevalence of male concurrency and syphilis prevalence (r = 0.79, P = 0.003). In the subnational studies, the relationship between concurrency and syphilis prevalence was positive in all cases but was statistically significant only for South Africa's racial groups (r = 0.98, P = 0.01). Conclusions: The finding of an ecological-level association between syphilis and partner concurrency needs to be replicated, but it suggests that efforts directed towards decreasing partner concurrency may reduce syphilis prevalence.
Abstract: Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that incorporates multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of the method through several numerical experiments, test various parameter-sharing structures in MTL, and compare the testing results of these structures. Finally, the method is applied to the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered even with sparse or noisy data.
Abstract: Due to the various performance requirements and data access restrictions of different types of real-time transactions, concurrency control protocols designed for systems with a single type of transaction are not sufficient for mixed real-time database systems (MRTDBS), in which different types of real-time transactions coexist concurrently. In this paper, a new concurrency control protocol, MRTT_CC, for mixed real-time transactions is proposed. The new strategy integrates different concurrency control protocols to meet the deadline requirements of different types of real-time transactions. The data similarity concept is also exploited in the new protocol to reduce the blocking time of soft real-time transactions, increasing their chances of meeting their deadlines. Simulation experiments show that the new protocol achieves good performance.
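A toy sketch of how data similarity can shorten the blocking time of soft real-time transactions: a reader may use a slightly stale version when the latest one is write-locked, provided the stale version is "similar enough". The timestamp-distance criterion, the threshold, and all names are assumptions for illustration, not the paper's definition.

```python
# Sketch: versioned data item, newest version last. A soft real-time
# reader blocked by a write lock on the newest version falls back to
# the newest version that is still "similar" to it, instead of waiting.

SIMILARITY_BOUND = 5.0   # seconds within which two versions count as similar

def similar(v1_ts, v2_ts, bound=SIMILARITY_BOUND):
    return abs(v1_ts - v2_ts) <= bound

def read_for_soft_txn(versions, locked_latest):
    """versions: list of (timestamp, value), newest last."""
    latest_ts, latest_val = versions[-1]
    if not locked_latest:
        return latest_val                  # no conflict, read newest
    for ts, val in reversed(versions[:-1]):
        if similar(ts, latest_ts):
            return val                     # stale but similar: no blocking
    return None                            # must block: nothing similar

versions = [(0.0, 10), (7.0, 12), (9.0, 13)]
print(read_for_soft_txn(versions, locked_latest=True))   # 12 (version at t=7.0)
print(read_for_soft_txn(versions, locked_latest=False))  # 13 (newest)
```

Hard real-time transactions would still go through the strict protocol; only the soft class trades a bounded amount of staleness for shorter blocking, which matches the mixed-protocol design the abstract describes.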
Abstract: The problem of maintaining data consistency in mobile broadcast environments is researched. Quasi-serializability is first formally defined and analyzed; it is shown to be less stringent than serializability when database consistency is maintained for transactions. A corresponding concurrency control protocol that supports both update transactions and read-only transactions is then outlined for mobile broadcast environments. Finally, simulation results confirm that the proposed protocol improves response time significantly.