Funding: financially supported by Ministerio de Ciencia e Innovación projects SAF2017-82736-C2-1-R to MTMF in Universidad Autónoma de Madrid and by Fundación Universidad Francisco de Vitoria to JS; a predoctoral scholarship from Fundación Universidad Francisco de Vitoria; a 6-month contract from Universidad Autónoma de Madrid; and a 3-month contract from the School of Medicine of Universidad Francisco de Vitoria.
Abstract: Every year, around the world, between 250,000 and 500,000 people suffer a spinal cord injury (SCI). SCI is a devastating medical condition that arises from trauma or disease-induced damage to the spinal cord, disrupting the neural connections that allow communication between the brain and the rest of the body, which results in varying degrees of motor and sensory impairment. Disconnection in the spinal tracts is an irreversible condition owing to the poor capacity for spontaneous axonal regeneration in the affected neurons.
Abstract: Assessing the stability of pillars in underground mines (especially in deep underground mines) is a critical concern during both the design and the operational phases of a project. This study mainly focuses on developing two practical models to predict pillar stability status. For this purpose, two robust models were developed using a database including 236 case histories from seven underground hard rock mines, based on gene expression programming (GEP) and decision tree-support vector machine (DT-SVM) hybrid algorithms. The performance of the developed models was evaluated based on four common statistical criteria (sensitivity, specificity, Matthews correlation coefficient, and accuracy), the receiver operating characteristic (ROC) curve, and testing data sets. The results showed that the GEP and DT-SVM models performed exceptionally well in assessing pillar stability, showing a high level of accuracy. The DT-SVM model, in particular, outperformed the GEP model (accuracy of 0.914, sensitivity of 0.842, specificity of 0.929, Matthews correlation coefficient of 0.767, and area under the ROC of 0.897 for the test data set). Furthermore, upon comparing the developed models with the previous ones, it was revealed that both models can effectively determine the condition of pillar stability with low uncertainty and acceptable accuracy. This suggests that these models could serve as dependable tools for project managers, aiding in the evaluation of pillar stability during the design and operational phases of mining projects, despite the inherent challenges in this domain.
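The evaluation criteria named above (sensitivity, specificity, Matthews correlation coefficient, accuracy, and area under the ROC curve) have standard definitions; the short Python sketch below shows how they could be computed for any binary pillar-stability classifier. The labels and scores are illustrative placeholders, not values from the paper's 236-case database, and scikit-learn is assumed purely for convenience.

```python
# Minimal sketch: the four statistical criteria plus ROC-AUC used to compare
# pillar-stability classifiers such as GEP and DT-SVM. Labels are hypothetical
# (1 = stable pillar, 0 = unstable/failed); they are NOT the paper's data.
import numpy as np
from sklearn.metrics import confusion_matrix, matthews_corrcoef, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])    # ground-truth stability status
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1])    # hard predictions from a model
y_score = np.array([0.9, 0.2, 0.8, 0.4, 0.1, 0.7, 0.3, 0.6, 0.95, 0.85])  # scores for ROC

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                          # true-positive rate
specificity = tn / (tn + fp)                          # true-negative rate
accuracy = (tp + tn) / (tp + tn + fp + fn)
mcc = matthews_corrcoef(y_true, y_pred)
auc = roc_auc_score(y_true, y_score)                  # area under the ROC curve

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} "
      f"accuracy={accuracy:.3f} MCC={mcc:.3f} AUC={auc:.3f}")
```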
Funding: partially supported by the National Natural Science Foundation of China (Grant Nos. 42177164 and 52474121) and the Distinguished Youth Science Foundation of Hunan Province of China (Grant No. 2022JJ10073).
Abstract: As a critical component of the in situ stress state, determination of the minimum horizontal principal stress plays a significant role in both geotechnical and petroleum engineering. To this end, a gene expression programming (GEP) algorithm-based model, in which the data of borehole breakout size, vertical principal stress, and rock strength characteristics are used as the inputs, is proposed to predict the minimum horizontal principal stress. Seventy-nine (79) samples with seven features were collected to construct the minimum horizontal principal stress dataset used for training models. Twenty-four (24) GEP model hyperparameter sets were configured to explore the key parameter combinations among the inputs and their potential relationships with the minimum horizontal principal stress. Model performance was evaluated using root mean squared error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and the coefficient of determination (R²). By comparing predictive performance and parameter composition, two models were selected from the 24 GEP models that demonstrated excellent predictive performance and simpler parameter composition. Compared with prevalent models, the results indicate that the two selected GEP models perform better on the test set (R² = 0.9568 and 0.9621). Additionally, SHapley Additive exPlanations (SHAP) sensitivity analysis and Local Interpretable Model-agnostic Explanations (LIME) demonstrate that the vertical principal stress is the most influential parameter in both GEP models. The two GEP models have simple parameter compositions as well as stable and excellent prediction performance, making them a viable method for predicting the minimum horizontal principal stress.
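The regression criteria listed above (RMSE, MAE, MAPE, and R²) are likewise standard; the sketch below computes them in plain NumPy for a hypothetical set of predicted minimum horizontal principal stresses. The numbers are placeholders, not the paper's 79-sample dataset.

```python
# Minimal sketch of the error metrics (RMSE, MAE, MAPE, R^2) used to rank the
# GEP models. The measured/predicted stresses below are placeholders only.
import numpy as np

sigma_h_measured = np.array([22.5, 30.1, 18.7, 41.3, 27.9])    # MPa, hypothetical
sigma_h_predicted = np.array([23.0, 29.4, 19.5, 40.1, 28.6])   # MPa, hypothetical

err = sigma_h_predicted - sigma_h_measured
rmse = np.sqrt(np.mean(err ** 2))
mae = np.mean(np.abs(err))
mape = np.mean(np.abs(err / sigma_h_measured)) * 100.0          # percent
ss_res = np.sum(err ** 2)
ss_tot = np.sum((sigma_h_measured - sigma_h_measured.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                                      # coefficient of determination

print(f"RMSE={rmse:.3f} MPa  MAE={mae:.3f} MPa  MAPE={mape:.2f}%  R^2={r2:.4f}")
```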
Funding: supported by the Canada First Research Excellence Fund, Medicine by Design (to CMM).
Abstract: Over the last two decades, the dogma that cell fate is immutable has been increasingly challenged, with important implications for regenerative medicine. The breakthrough discovery that induced pluripotent stem cells could be generated from adult mouse fibroblasts is powerful proof that cell fate can be changed. An exciting extension of the discovery of cell fate impermanence is the direct cellular reprogramming hypothesis: that terminally differentiated cells can be reprogrammed into other adult cell fates without first passing through a stem cell state.
Funding: supported by National Institute on Aging (NIH-NIA) grant R21 AG074152 (to KMA), National Institute of Allergy and Infectious Diseases (NIAID) grant DP2 AI171150 (to KMA), and Department of Defense (DoD) grant AZ210089 (to KMA).
Abstract: The brain's extracellular matrix (ECM), which is comprised of protein and glycosaminoglycan (GAG) scaffolds, constitutes 20%-40% of the human brain and is considered one of the largest influencers on brain cell functioning (Soles et al., 2023). Synthesized by neural and glial cells, the brain's ECM regulates a myriad of homeostatic cellular processes, including neuronal plasticity and firing (Miyata et al., 2012), cation buffering (Morawski et al., 2015), and glia-neuron interactions (Anderson et al., 2016). Considering the diversity of functions, dynamic remodeling of the brain's ECM indicates that this understudied medium is an active participant in both normal physiology and neurological diseases.
Funding: supported by the National Natural Science Foundation of China (No. 62203256).
Abstract: Generating dynamically feasible trajectories for fixed-wing Unmanned Aerial Vehicles (UAVs) in dense obstacle environments remains computationally challenging. This paper proposes a Safe Flight Corridor constrained Sequential Convex Programming (SFC-SCP) method to improve the computational efficiency and reliability of trajectory generation. SFC-SCP combines front-end convex polyhedron SFC construction with back-end SCP-based trajectory optimization. A Sparse A* Search (SAS) driven SFC construction method is designed to efficiently generate polyhedral SFCs according to the geometric relations among obstacles and collision-free waypoints. By transforming the nonconvex obstacle-avoidance constraints into linear inequality constraints, the SFC mitigates the infeasibility of trajectory planning and reduces computational complexity. SCP then casts the nonlinear trajectory optimization subject to the SFC into convex programming subproblems to decrease the problem complexity. In addition, a convex optimizer based on the interior point method is customized, where the search direction is calculated via successive elimination to further improve efficiency. Simulation experiments on dense obstacle scenarios show that SFC-SCP can rapidly generate dynamically feasible, safe trajectories. Comparative studies with state-of-the-art SCP-based methods demonstrate the efficiency and reliability merits of SFC-SCP. Moreover, the customized convex optimizer outperforms off-the-shelf optimizers in terms of computation time.
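The key back-end step described above, replacing nonconvex obstacle-avoidance constraints with linear corridor constraints so that each subproblem is convex, can be illustrated with a toy quadratic program. The sketch below uses cvxpy (not mentioned in the paper) to smooth a 2-D path while keeping every waypoint inside an axis-aligned box corridor; the actual SFC-SCP method uses general convex polyhedra, fixed-wing dynamics, and a customized interior-point solver.

```python
# Toy corridor-constrained convex subproblem: minimize discrete second
# differences (a smoothness proxy) while every waypoint stays inside a
# box-shaped safe region. A simplification of the SFC idea, not the
# paper's formulation (no fixed-wing dynamics, no polyhedral SFC).
import numpy as np
import cvxpy as cp

N = 20
start, goal = np.array([0.0, 0.0]), np.array([10.0, 4.0])

p = cp.Variable((N, 2))                            # waypoint positions
accel = p[2:] - 2 * p[1:-1] + p[:-2]               # discrete second differences
objective = cp.Minimize(cp.sum_squares(accel))
constraints = [p[0] == start, p[-1] == goal,       # fixed endpoints
               p[:, 0] >= -1.0, p[:, 0] <= 11.0,   # hypothetical corridor bounds (x)
               p[:, 1] >= -1.0, p[:, 1] <= 5.0]    # hypothetical corridor bounds (y)
cp.Problem(objective, constraints).solve()
print(np.round(p.value[:3], 3))                    # first few smoothed waypoints
```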
Abstract: This study proposes a novel approach to optimizing individual work schedules for book digitization using mixed-integer programming (MIP). By leveraging the power of MIP solvers, we aimed to minimize the overall digitization time while considering various constraints and process dependencies. The book digitization process involves three key steps: cutting, scanning, and binding. Each step has specific requirements and limitations, such as the number of pages that can be processed simultaneously and potential bottlenecks. To address these complexities, we formulate the problem as a one-machine job shop scheduling problem with additional constraints that capture the unique characteristics of book digitization. We conducted a series of experiments to evaluate the performance of our proposed approach. By comparing the optimized schedules with the baseline approach, we demonstrated significant reductions in overall processing time. In addition, we analyzed the impact of different weighting schemes on the optimization results, highlighting the importance of identifying and prioritizing critical processes. Our findings suggest that MIP-based optimization can be a valuable tool for improving the efficiency of individual work schedules, even in seemingly simple tasks such as book digitization. By carefully considering specific constraints and objectives, we can save time and make better use of resources.
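The cut, scan, and bind steps sharing a single operator form a small precedence-constrained scheduling MIP. The sketch below, written with PuLP (one of several MIP front ends; the paper does not name its solver), minimizes the makespan for two hypothetical books while enforcing step order and no-overlap constraints; it is a deliberately reduced version of the formulation described above, without page-count limits or process weights.

```python
# Reduced MIP sketch of the book-digitization schedule: each book needs
# cut -> scan -> bind in order, all steps share one operator, and the
# makespan is minimized. Durations are hypothetical.
import itertools
import pulp

dur = {"A_cut": 3, "A_scan": 5, "A_bind": 2,
       "B_cut": 4, "B_scan": 6, "B_bind": 3}      # task durations (minutes)
tasks = list(dur)
M = sum(dur.values())                             # big-M for no-overlap constraints

prob = pulp.LpProblem("digitization", pulp.LpMinimize)
start = pulp.LpVariable.dicts("start", tasks, lowBound=0)
makespan = pulp.LpVariable("makespan", lowBound=0)
prob += makespan                                  # objective: minimize makespan

for book in ("A", "B"):                           # precedence: cut -> scan -> bind
    prob += start[f"{book}_scan"] >= start[f"{book}_cut"] + dur[f"{book}_cut"]
    prob += start[f"{book}_bind"] >= start[f"{book}_scan"] + dur[f"{book}_scan"]
for t in tasks:                                   # makespan covers every task
    prob += makespan >= start[t] + dur[t]
for t1, t2 in itertools.combinations(tasks, 2):   # one operator: tasks cannot overlap
    order = pulp.LpVariable(f"order_{t1}_{t2}", cat="Binary")
    prob += start[t1] + dur[t1] <= start[t2] + M * (1 - order)
    prob += start[t2] + dur[t2] <= start[t1] + M * order

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in sorted(tasks, key=lambda t: start[t].value()):
    print(f"{t}: start={start[t].value()}")
print("makespan:", makespan.value())
```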
Funding: Education and Teaching Research Project of Beijing University of Technology (ER2024KCB08).
Abstract: With the rapid development of artificial intelligence technology, AIGC (Artificial Intelligence-Generated Content) has triggered profound changes in high-level language programming courses. This paper explores the application principles, advantages, and limitations of AIGC in intelligent code generation, analyzes the new mode of human-computer collaboration in high-level language programming courses driven by AIGC, discusses the impact of human-computer collaboration on programming efficiency and code quality through practical case studies, and looks forward to future development trends. This research aims to provide theoretical and practical guidance for high-level language programming courses and to promote their innovative development under the human-computer collaboration paradigm.
Funding: supported in part by the National Natural Science Foundation of China (62422405, 62025111, 62495100, 92464302), the STI 2030-Major Projects (2021ZD0201200), the Shanghai Municipal Science and Technology Major Project, and the Beijing Advanced Innovation Center for Integrated Circuits.
Abstract: Computing-in-memory (CIM) has been a promising candidate for artificial-intelligence applications thanks to the absence of data transfer between computation and storage blocks. Resistive random access memory (RRAM) based CIM has the advantages of high computing density, non-volatility, and high energy efficiency. However, previous CIM research has predominantly focused on realizing high energy efficiency and high area efficiency for inference, while little attention has been devoted to addressing the challenges of on-chip programming speed, power consumption, and accuracy. In this paper, a fabricated 28 nm 576K RRAM-based CIM macro featuring optimized on-chip programming schemes is proposed to address these issues. Different strategies for mapping weights to RRAM arrays are compared, and a novel direct-current ADC is designed for both the programming and inference stages. Utilizing the optimized hybrid programming scheme, 4.67× faster programming, 0.15× power consumption, and a 4.31× more compact weight distribution are realized. Besides, this macro achieves a normalized area efficiency of 2.82 TOPS/mm² and a normalized energy efficiency of 35.6 TOPS/W.
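One widely used generic scheme for mapping signed weights onto RRAM arrays, though not necessarily the strategy adopted in this macro, represents each weight as the difference between two quantized conductances. The sketch below illustrates that differential mapping purely as a conceptual aid; the level count and normalization are assumptions, not device parameters from the paper.

```python
# Generic illustration of mapping signed neural-network weights to a pair of
# quantized RRAM conductances (G_plus - G_minus). A textbook-style differential
# scheme, not the specific mapping evaluated in the 28 nm macro.
import numpy as np

LEVELS = 16        # hypothetical number of programmable conductance levels
G_MAX = 1.0        # conductances normalized to [0, 1]

def map_weight(w, w_max):
    """Quantize a signed weight into a (G_plus, G_minus) differential pair."""
    g = np.clip(abs(w) / w_max, 0.0, 1.0) * G_MAX
    g_q = np.round(g * (LEVELS - 1)) / (LEVELS - 1)   # snap to discrete levels
    return (g_q, 0.0) if w >= 0 else (0.0, g_q)

weights = np.random.default_rng(0).normal(0.0, 0.3, size=8)
pairs = [map_weight(w, w_max=1.0) for w in weights]
recovered = np.array([gp - gm for gp, gm in pairs])   # effective signed value
print(np.round(weights, 3))
print(np.round(recovered, 3))
```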
Funding: Education and Teaching Research Project of Beijing University of Technology (ER2024KCB08).
Abstract: With the widespread application of large language models (LLMs) in natural language processing and code generation, traditional High-Level Language Programming courses are facing unprecedented challenges and opportunities. As a core programming language for computer science majors, C remains irreplaceable due to its foundational nature and engineering adaptability. This paper, based on the rapid development of large model technologies, proposes a systematic reform design for C language teaching, focusing on teaching objectives, content structure, teaching methods, and evaluation systems. The article suggests a teaching framework centered on "human-computer collaborative programming," integrating prompt training, AI-assisted debugging, and code generation analysis, aiming to enhance students' problem modeling ability, programming expression skills, and AI collaboration literacy.
Abstract: More than seventy years before airplanes were invented, a twelve-year-old girl named Ada Lovelace dreamed of flying. She studied birds and experimented with materials to make wings, even writing a guide called Flyology. But her curiosity didn't stop there.