Funding: supported by the National Key Research and Development Program of China (Nos. 2022YFA1602404 and 2023YFA1606901), the National Natural Science Foundation of China (Nos. 12275338, 12388102, and U2441221), and the Key Laboratory of Nuclear Data Foundation (JCKY2022201C152).
Abstract: Photonuclear data are increasingly used in fundamental nuclear research and technological applications. These data are generated using advanced γ-ray sources. The Shanghai Laser Electron Gamma Source (SLEGS) is a new laser Compton scattering γ-ray source at the Shanghai Synchrotron Radiation Facility. It delivers energy-tunable, quasi-monoenergetic gamma beams for high-precision photonuclear measurements. This paper presents the flat-efficiency detector (FED) array at SLEGS and its application in photoneutron cross-section measurements. The systematic uncertainty of the FED array was determined to be 3.02% through calibration with a ²⁵²Cf neutron source. Using ¹⁹⁷Au and ¹⁵⁹Tb as representative nuclei, we demonstrate the format and processing methodology for raw photoneutron data. The results validate SLEGS' capability for high-precision photoneutron measurements.
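A flat detection efficiency lets the cross section be extracted as a simple ratio, without unfolding an energy-dependent detector response against the unknown neutron spectrum. As a minimal sketch (with illustrative notation that is not taken from the paper), the single-neutron photoneutron cross section follows from a relation of the form

```latex
% Illustrative symbols (assumptions, not the paper's notation):
% N_n      -- number of detected neutrons
% N_\gamma -- number of incident photons on target
% n_t      -- target areal density in atoms per cm^2
% \varepsilon_n -- (flat) neutron detection efficiency
\sigma(E_\gamma) = \frac{N_n}{N_\gamma \, n_t \, \varepsilon_n}
```

Because ε_n is nearly constant in neutron energy for a flat-efficiency array, it enters only as a single scale factor, which is the property the ²⁵²Cf calibration above pins down.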
Funding: supported by the Science Council in Taiwan, China (Grant No. 113-2221-E-492-016).
Abstract: The Data Market Management Strategy project proposes a comprehensive framework to harness AI technologies for optimizing data-driven decision-making processes. This framework, illustrated as an integrated ecosystem, underscores the importance of data and model reuse through a structured marketplace environment. However, challenges such as data standardization, interoperability, and privacy concerns remain prevalent in current data markets. For instance, many data platforms still suffer from data silos and inconsistent metadata standards, making it difficult for researchers to efficiently access and reuse data across sectors. Addressing these issues, the proposed system integrates a data market and a model marketplace, facilitating seamless information exchange through the Computing Cloud in Taiwan, China. Within this ecosystem, users can generate new models and upload and share data, contributing to a dynamic and continuously evolving repository. The system enables users to access diverse datasets via standardized APIs and develop advanced models within modular containers such as Jupyter Notebooks. The model marketplace serves as a critical hub, supporting AI model sharing, refinement, and lifecycle management, and fostering an environment where data and models are continuously reused. By emphasizing interdisciplinary collaboration, the framework enhances resource utilization, mitigates redundant effort, and accelerates the development of novel AI solutions. The proposed approach aligns with global trends in federated learning, privacy-preserving techniques, and open AI model hubs (e.g., Hugging Face, TensorFlow Hub), ensuring ethical and secure data practices. Ultimately, the framework promotes scalable AI-powered applications, contributing to a more sustainable future in data management.
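As an illustration of the standardized-API access pattern described above, the following minimal Python sketch shows how a client might download a dataset through a REST endpoint; the URL, dataset-ID scheme, and fetch_dataset helper are all hypothetical, not part of the project's actual interface.

```python
import requests

# Hypothetical base URL -- illustrative only, not the project's real API.
BASE_URL = "https://datamarket.example.org/api/v1"

def fetch_dataset(dataset_id: str, token: str) -> bytes:
    """Download one dataset via a standardized, token-authenticated API."""
    resp = requests.get(
        f"{BASE_URL}/datasets/{dataset_id}/download",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()  # surface HTTP errors instead of silent failure
    return resp.content
```

A client like this could run inside one of the modular Jupyter Notebook containers the abstract mentions, keeping data access uniform across datasets.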
Funding: supported by the National Natural Science Foundation of China under Grant Nos. 62171405, 62225114, and 62101489.
Abstract: Filter bank multicarrier (FBMC) systems with offset quadrature amplitude modulation (OQAM) need long data blocks to achieve high spectral efficiency. However, the transmission of long data blocks in underwater acoustic (UWA) communication systems often encounters the challenge of time-varying channels. This paper proposes a time-varying channel tracking method for short-range, high-rate UWA FBMC-OQAM communication applications. First, a known preamble is used to initialize the channel estimate at the start of the signal block. Next, the estimated channel is applied to detect data symbols over several symbol periods. The detected data symbols are then reused as new pilots to estimate the channel at the next time instant. Throughout these steps, the unified transmission matrix model is extended to describe the time-varying channel input-output relationship and is used for symbol detection. Simulation results show that the channel tracking error can be reduced to less than −20 dB when the channel temporal coherence coefficient exceeds 0.75 within one block period of FBMC-OQAM signals. Compared with conventional known-pilot-based methods, the proposed method requires lower system overhead while exhibiting similar time-varying channel tracking performance. Sea trial results further demonstrated the practicality of the proposed method.
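The tracking loop described above (preamble-based initialization, then detected symbols reused as pilots) can be sketched as a simple decision-directed estimator. The single-tap channel model, QPSK alphabet, and LMS update below are simplifying assumptions for illustration and stand in for the paper's transmission-matrix model.

```python
import numpy as np

def qpsk_decision(x: complex) -> complex:
    """Hard decision onto the unit-energy QPSK alphabet."""
    return (np.sign(x.real) + 1j * np.sign(x.imag)) / np.sqrt(2)

def track_channel(rx: np.ndarray, tx_preamble: np.ndarray, mu: float = 0.1):
    """Decision-directed tracking of a single-tap time-varying channel.

    rx: received samples; tx_preamble: known symbols at the block start.
    Returns the per-sample channel estimate.
    """
    n_pre = len(tx_preamble)
    h = np.mean(rx[:n_pre] / tx_preamble)        # preamble-based initialization
    h_track = np.empty(len(rx), dtype=complex)
    for k in range(len(rx)):
        # Known pilot during the preamble, detected symbol afterwards.
        s = tx_preamble[k] if k < n_pre else qpsk_decision(rx[k] / h)
        e = rx[k] - h * s                        # instantaneous estimation error
        h = h + mu * e * np.conj(s)              # LMS update toward the new channel
        h_track[k] = h
    return h_track
```

The key property mirrored here is the overhead saving: after the preamble, no further pilot symbols are spent, since detected data take their place.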
Funding: supported by the National Science and Technology Major Project (2022ZD0119001), the National Natural Science Foundation of China (61834005, 61802304), and the Shaanxi Provincial Key Research and Development Plan (2024GX-YBXM-100).
Abstract: With the rapid iteration of neural network algorithms, ever higher demands are placed on the computational performance and memory-access bandwidth of neural network accelerators. Simply increasing bandwidth does not improve energy efficiency, so improving the data reuse rate is an active research topic. To support data reuse, a reconfigurable convolutional neural network (CNN) accelerator based on elastic storage (RCAES) was designed in this paper. Supporting elastic memory access and flexible data flow reduces data movement between the processor and memory, eases bandwidth pressure, and enhances CNN acceleration performance. Experimental results indicate that for 1×1 and 3×3 convolutions, execution speed increased by 25.00% and 61.61%, respectively, and 3×3 maximum pooling was accelerated by 76.04%.
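The data-reuse opportunity that motivates such accelerators is easy to see in a plain 3×3 convolution: each input element is read up to nine times, so keeping a small window of rows in on-chip storage converts most of those reads into local accesses. The sketch below only illustrates the access pattern; the buffering comments describe a generic row-buffer scheme, not RCAES's specific elastic-storage design.

```python
import numpy as np

def conv3x3(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Valid 3x3 convolution over a 2-D input, written to expose reuse."""
    H, W = x.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        # A row-buffer design would hold rows i..i+2 on-chip here, so every
        # output column of this row reuses them without touching DRAM.
        for j in range(W - 2):
            # Each x[i:i+3, j:j+3] window overlaps its neighbor in 6 of 9
            # elements -- the reuse an accelerator tries to capture.
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * w)
    return out
```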
Abstract: In recent years, as newer technologies have evolved around the healthcare ecosystem, more and more data have been generated. Advanced analytics could harness the data collected from numerous sources, both from healthcare institutions and from individuals themselves via apps and devices, to drive innovations in the treatment and diagnosis of diseases, improve patient care, and empower citizens to participate in decision-making about their own health and well-being. However, the sensitive nature of health data prevents healthcare organizations from sharing them. The Personal Health Train (PHT) is a novel approach that aims to establish a distributed data analytics infrastructure enabling the (re)use of distributed healthcare data while data owners stay in control of their own data. The main principle of the PHT is that data remain in their original location; analytical tasks visit the data sources and are executed there. The PHT provides a distributed, flexible approach to using data in a network of participants, incorporating the FAIR principles. It facilitates the responsible use of sensitive and/or personal data by adopting international principles and regulations. This paper presents the concepts and main components of the PHT and demonstrates how it complies with the FAIR principles.
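The "tasks visit the data" principle can be illustrated with a toy example in which the analysis is a function shipped to each station and only aggregate results leave; the station names and the run_at_station dispatcher are invented for this sketch and are not PHT components.

```python
# Toy analysis task executed locally at one data station.
def count_patients_over(age_limit: int, records: list) -> int:
    return sum(1 for r in records if r["age"] > age_limit)

def run_at_station(task, station_data):
    # In a real PHT deployment the task travels to the station;
    # here we simply call it on the station's local records.
    return task(station_data)

stations = {
    "hospital_a": [{"age": 70}, {"age": 34}],
    "hospital_b": [{"age": 81}, {"age": 66}, {"age": 59}],
}

# Only counts (never patient records) are aggregated across stations.
total = sum(
    run_at_station(lambda recs: count_patients_over(65, recs), data)
    for data in stations.values()
)
print(total)  # 3
```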
Funding: supported in part by the NWO Aspasia Grant (91716421) and by the Maastricht York-Partnership Grant.
Abstract: Easy access to data is one of the main avenues to accelerate scientific research. As a key element of scientific innovation, data sharing allows the reproduction of results and helps prevent data fabrication, falsification, and misuse. Although the research benefits of data reuse are widely acknowledged, the data collections that exist today are still kept in silos. Indeed, monitoring what happens to data once they have been handed to a third party is not feasible under current data sharing practices. We propose a blockchain-based system to trace data collections and potentially create a more trustworthy data sharing process. In this paper, we present the LUCE (License accoUntability and CompliancE) architecture, a decentralized blockchain-based platform supporting data sharing and reuse. LUCE is designed to provide full transparency on what happens to data after they are shared with third parties. The contributions of this work are i) the design of a decentralized data sharing solution with accountability and compliance by design, and ii) the inclusion of a dynamic consent model for personalized data sharing preferences and for enabling legal compliance mechanisms. We test the scalability of the platform in a real-time environment where a growing number of users access and reuse different datasets. Compared with existing data sharing solutions, LUCE provides transparency over data sharing practices, enables data reuse, and supports regulatory requirements. The experiments show that the platform scales to a large number of users.
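The dynamic consent model can be pictured as a small state machine: the data owner may revise permitted purposes at any time, and every access decision is appended to a log (which in LUCE would be anchored on chain). The class and method names below are invented for illustration, not taken from the LUCE codebase.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Toy dynamic-consent record for one dataset."""
    allowed_purposes: set = field(default_factory=set)
    access_log: list = field(default_factory=list)

    def update_consent(self, purposes):
        """Owner revises which purposes are currently permitted."""
        self.allowed_purposes = set(purposes)

    def request_access(self, requester: str, purpose: str) -> bool:
        granted = purpose in self.allowed_purposes
        # Every decision is logged, granted or not -- the accountability trail.
        self.access_log.append((requester, purpose, granted))
        return granted

consent = ConsentRecord()
consent.update_consent({"research"})
print(consent.request_access("lab_x", "research"))   # True
consent.update_consent(set())                        # owner revokes consent
print(consent.request_access("lab_x", "research"))   # False
```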
Funding: supported in part by the National Natural Science Foundation of China under Grant Nos. 60621003 and 60873014.
Abstract: The widening gap between processor and memory speeds makes the cache an important issue in computer system design. Compared with the working sets of programs, cache capacity is often scarce, so it is very important for a computer system to use the cache efficiently. For DOOC (Data-Object Oriented Cache), a recently proposed dynamically reconfigurable cache, this paper proposes a quantitative framework for analyzing the cache requirements of data-objects, covering cache capacity, block size, associativity, and coherence protocol. A graph coloring algorithm that resolves competition between data-objects in the DOOC is proposed as well. Finally, we apply our approaches to the compiler management of the DOOC and test them on both a single-core platform and a four-core platform. Compared with traditional caches, the DOOC achieves an average miss-rate reduction of 44.98% and 49.69% on the two platforms, respectively, and its performance is very close to that of an ideal optimal cache.
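The coloring step can be pictured as follows: data-objects are vertices, an edge joins two objects that compete for the same cache resource, and colors correspond to cache partitions. The greedy heuristic below is a generic stand-in for illustration, not the paper's specific algorithm.

```python
def greedy_coloring(conflicts: dict) -> dict:
    """Assign partition colors to data-objects.

    conflicts: maps each data-object to the set of objects it conflicts with.
    Returns a mapping from data-object to the smallest non-conflicting color.
    """
    color = {}
    # Color high-degree (most-contended) objects first, Welsh-Powell style.
    for obj in sorted(conflicts, key=lambda o: len(conflicts[o]), reverse=True):
        used = {color[n] for n in conflicts[obj] if n in color}
        c = 0
        while c in used:   # smallest color unused by any colored neighbor
            c += 1
        color[obj] = c
    return color

conflicts = {
    "A": {"B", "C"},   # A's live range overlaps both B's and C's
    "B": {"A"},
    "C": {"A"},
}
print(greedy_coloring(conflicts))  # e.g. {'A': 0, 'B': 1, 'C': 1}
```

Objects that never conflict share a color, and hence a cache partition, which is exactly the sharing the DOOC compiler management needs to arbitrate.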
Abstract: Research Data Management (RDM) has become increasingly important for more and more academic institutions. Using the Peking University Open Research Data Repository (PKU-ORDR) project as an example, this paper reviews a library-based, university-wide open research data repository project and the implementation of related RDM services, including project kickoff, needs assessment, partnership establishment, software investigation and selection, software customization, and data curation services and training. The review also discusses issues revealed during the implementation process, such as awareness of research data, demands from data providers and users, data policies and requirements of the home institution, requirements from funding agencies and publishers, collaboration between administrative units and libraries, and concerns of data providers and users. The significance of the study is that it offers an example of creating an Open Data repository and RDM services for other Chinese academic libraries planning to implement RDM services for their home institutions. The authors have also observed that, since the PKU-ORDR and RDM services were implemented in 2015, the Peking University Library (PKUL) has helped numerous researchers throughout the entire research life cycle, enhanced Open Science (OS) practices on campus, and influenced the national OS movement in China through various national events and activities hosted by the PKUL.