Funding: Supported by the State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, under Grant No. RCS2012ZT008; the National Key Basic Research Program of China (973 Program) under Grant No. 2012CB316100(2); the National Natural Science Foundation of China under Grants No. 61201203 and No. 61171064; and the Fundamental Research Funds for the Central Universities under Grant No. 2012JBM030.
Abstract: In this paper, we first overview some traditional relaying technologies, and then present a Network Coding-Aware Cooperative Relaying (NC2R) scheme to improve the performance of downlink transmission for relay-aided cellular networks. Moreover, systematic performance analysis and extensive simulations are carried out for the proposed NC2R and for traditional relaying and non-relaying schemes. The results show that NC2R outperforms conventional relaying and non-relaying schemes in terms of blocking probability and spectral efficiency, especially for cell-edge users. The location selection for relays with NC2R is also discussed. These results provide some insights for incorporating network coding into next-generation broadband cellular relay mobile systems.
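The abstract does not spell out the relay's coding operation. A common primitive behind network-coding-aware relaying is bitwise XOR combining of two users' downlink packets at the relay; the sketch below is a minimal illustration of that primitive only (packet contents are random placeholders), not of the full NC2R scheme.

```python
# Minimal sketch of XOR-based network coding at a relay (an assumption about
# the coding primitive; the NC2R scheme itself may differ in detail).
import numpy as np

rng = np.random.default_rng(0)

# Downlink packets destined for user A and user B (bit vectors).
pkt_a = rng.integers(0, 2, size=16, dtype=np.uint8)
pkt_b = rng.integers(0, 2, size=16, dtype=np.uint8)

# The relay broadcasts a single coded packet instead of two separate ones.
coded = np.bitwise_xor(pkt_a, pkt_b)

# Each user combines the coded packet with the packet it already overheard
# on the direct base-station link to recover its own data.
recovered_a = np.bitwise_xor(coded, pkt_b)   # user A overheard pkt_b
recovered_b = np.bitwise_xor(coded, pkt_a)   # user B overheard pkt_a

assert np.array_equal(recovered_a, pkt_a)
assert np.array_equal(recovered_b, pkt_b)
print("both users decoded their packets from one relay transmission")
```

Intuitively, the spectral-efficiency gain reported above stems from the relay serving two users with one coded transmission instead of two separate ones.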
Funding: Supported by the key project of the National Natural Science Foundation of China (No. 61431001); the 863 Project No. 2014AA01A701; the Program for New Century Excellent Talents in University (NECT12-0774); the open research fund of the National Mobile Communications Research Laboratory, Southeast University (No. 2013D12); the Fundamental Research Funds for the Central Universities (FRF-BD-15-012A); the Research Foundation of China Mobile; and the Foundation of Beijing Engineering and Technology Center for Convergence Networks and Ubiquitous Services.
Abstract: While operators have started deploying fourth-generation (4G) wireless communication systems, which can provide up to 1 Gbps downlink peak data rate, the improved system capacity is still insufficient to meet the drastically increasing demand of mobile users over the next decade. This shortfall has two main causes: 1) the growth rate of network capacity is far below that of user demand, and 2) the relatively deterministic wireless access network (WAN) architecture of existing systems cannot accommodate the prominent increase of mobile traffic with space-time dynamics. To address these challenges, we investigate a time-spatial consistency architecture for the future WAN, while emphasizing the critical roles of spectrally efficient techniques such as massive multiple-input multiple-output (MIMO), full-duplex (FD) operation, and heterogeneous networks (HetNets). Furthermore, the energy efficiency (EE) of HetNets under the proposed architecture is evaluated, showing that the proposed user-selected uplink power control algorithm outperforms the traditional stochastic-scheduling strategy in terms of both capacity and EE in a two-tier HetNet. Other critical issues, including the tidal effect, temporary failures owing to instantaneously increased traffic, and the network-wide load-balancing problem, are also expected to be addressed within the proposed architecture.
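The abstract names a user-selected uplink power control algorithm but does not define it. As context only, the following hedged sketch shows conventional open-loop fractional uplink power control (a 3GPP-style baseline that such algorithms typically refine); all parameter values here are assumptions.

```python
# Hedged sketch: conventional open-loop fractional uplink power control,
# shown only as the kind of baseline a user-selected power-control algorithm
# would build on; the paper's actual algorithm is not specified in the abstract.
import math

def uplink_tx_power_dbm(path_loss_db: float,
                        p0_dbm: float = -80.0,   # target received power level (assumed)
                        alpha: float = 0.8,      # fractional compensation factor (assumed)
                        n_prb: int = 10,         # number of allocated resource blocks (assumed)
                        p_max_dbm: float = 23.0) -> float:
    """Fractional path-loss compensation, capped at the device's maximum power."""
    return min(p_max_dbm, p0_dbm + 10.0 * math.log10(n_prb) + alpha * path_loss_db)

for pl in (90.0, 110.0, 130.0):
    print(f"path loss {pl:5.1f} dB -> Tx power {uplink_tx_power_dbm(pl):5.1f} dBm")
```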
Funding: Supported by the National Natural Science Foundation of China (60572093) and the Specialized Research Fund for the Doctoral Program of Higher Education (20050004016).
Abstract: Single-carrier block transmission (SCBT), also known as single-carrier frequency-domain equalization (SC-FDE), is being considered as an optional technique for wireless personal area networks (WPANs) operating at 60 GHz. It has been found that, for residential environments with non-line-of-sight (NLOS) multi-path channels, SCBT is much more effective at combating inter-symbol interference (ISI) than orthogonal frequency division multiplexing (OFDM). Low-density parity-check (LDPC) codes are a class of linear block codes that provide near-capacity performance on a large collection of data transmission and storage channels while admitting implementable decoders. To facilitate the use of LDPC codes in an SCBT system, a new log-likelihood ratio (LLR) calculation method is proposed based on pilot symbols (PS). Golay sequences, whose sum autocorrelation has a single peak and zero sidelobes, are used to create the PS. The position and length of the PS are not fixed within the data blocks. Simulation results show that the proposed method can significantly improve LDPC decoding performance in the SCBT system. This is very promising for supporting ultra-high-data-rate wireless transmission.
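The key property used for the pilot symbols is that a Golay complementary pair has a sum aperiodic autocorrelation with a single peak and zero sidelobes. The short sketch below verifies this numerically for a recursively constructed pair; the sequence length is illustrative and not necessarily the one used in the paper.

```python
# Minimal sketch verifying the Golay complementary-pair property relied on above:
# the sum of the two aperiodic autocorrelations has a single peak and zero sidelobes.
import numpy as np

def golay_pair(order: int):
    """Build a binary Golay complementary pair of length 2**order recursively."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(order):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(5)                          # length-32 pair (illustrative length)
r = np.correlate(a, a, "full") + np.correlate(b, b, "full")

n = len(a)
assert r[n - 1] == 2 * n                      # single peak of height 2N at zero lag
assert np.allclose(np.delete(r, n - 1), 0.0)  # all sidelobes cancel exactly
print("sum autocorrelation peak:", int(r[n - 1]),
      "| max sidelobe:", float(np.abs(np.delete(r, n - 1)).max()))
```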
Funding: Supported by the National Science Foundation (NSF) (ECCS-2128616 and ECCS-1642962 to P.R.P.); the Office of Naval Research (ONR) (N00014-18-1-2297 and N00014-20-1-2664 to P.R.P.); and the Defense Advanced Research Projects Agency (HR00111990049 to P.R.P.).
Abstract: Radio-frequency interference is a growing concern as wireless technology advances, with potentially life-threatening consequences such as interference between radar altimeters and 5G cellular networks. Mobile transceivers mix signals with ratios that vary over time, posing challenges for conventional digital signal processing (DSP) due to its high latency. These challenges will worsen as future wireless technologies adopt higher carrier frequencies and data rates. However, conventional DSPs, already on the brink of their clock-frequency limit, are expected to offer only marginal speed advancements. This paper introduces a photonic processor that addresses dynamic interference through blind source separation (BSS). Our system-on-chip processor employs a fully integrated photonic signal pathway in the analogue domain, enabling rapid demixing of received mixtures and recovery of the signal of interest in under 15 picoseconds, a reduction in latency that surpasses electronic counterparts by more than three orders of magnitude. To complement the photonic processor, electronic peripherals based on a field-programmable gate array (FPGA) assess the effectiveness of demixing and continuously update the demixing weights at a rate of up to 305 Hz. The compact setup features precise dithering weight control, an impedance-controlled circuit board, and optical-fibre packaging, making it suitable for handheld and mobile scenarios. We experimentally demonstrate the processor's ability to suppress transmission errors and maintain signal-to-noise ratios in two scenarios: radar altimeters and mobile communications. This work pioneers real-time adaptability in integrated silicon photonics, enabling online learning and weight adjustment, and showcases practical operational applications for photonic processing.
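For readers unfamiliar with blind source separation, the hedged sketch below reproduces the demixing mathematics in software for a two-channel toy mixture (whitening followed by a kurtosis-based rotation search). It only illustrates what "demixing weights" compute; in the paper the demixing itself is carried out in the analogue photonic pathway, with the FPGA updating the weights.

```python
# Software-only sketch of two-channel blind source separation; the mixing
# matrix, source models, and separation criterion are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Two independent, non-Gaussian "sources": a signal of interest and an interferer.
s = np.vstack([np.sign(rng.standard_normal(n)),   # BPSK-like signal
               rng.laplace(size=n)])              # impulsive interferer
A = np.array([[0.9, 0.6],
              [0.4, 1.1]])                        # unknown mixing (assumed)
x = A @ s

# Whiten the received mixtures.
x -= x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = (E @ np.diag(d ** -0.5) @ E.T) @ x

# After whitening, demixing reduces to a rotation; pick the angle that
# maximises non-Gaussianity (sum of absolute excess kurtosis).
def kurt(y):
    return np.mean(y ** 4, axis=1) - 3.0

best = max(np.linspace(0.0, np.pi / 2, 512),
           key=lambda t: np.abs(kurt(np.array([[np.cos(t), -np.sin(t)],
                                               [np.sin(t),  np.cos(t)]]) @ z)).sum())
R = np.array([[np.cos(best), -np.sin(best)], [np.sin(best), np.cos(best)]])
y = R @ z                                         # recovered sources (up to order/scale)

corr = np.abs(np.corrcoef(np.vstack([y, s]))[:2, 2:])
print("max |correlation| of each output with a true source:", corr.max(axis=1).round(3))
```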
Funding: Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2075 - 390740016.
Abstract: Efficiently creating a concise but comprehensive data set for training machine-learned interatomic potentials (MLIPs) is an under-explored problem. Active learning, which uses biased or unbiased molecular dynamics (MD) to generate candidate pools, aims to address this objective. Existing biased and unbiased MD-simulation methods, however, are prone to miss either rare events or extrapolative regions, i.e., areas of configurational space where unreliable predictions are made. This work demonstrates that MD, when biased by the MLIP's energy uncertainty, simultaneously captures extrapolative regions and rare events, which is crucial for developing uniformly accurate MLIPs. Furthermore, exploiting automatic differentiation, we enhance bias-forces-driven MD with the concept of a bias stress.
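As a concrete, deliberately tiny illustration of the idea, the sketch below runs overdamped Langevin dynamics on a one-dimensional toy "ensemble potential" whose standard deviation plays the role of the MLIP energy uncertainty; subtracting it from the mean energy biases the walker toward uncertain, extrapolative regions. Everything here (model form, parameters, finite-difference forces) is an assumption for illustration; the paper uses real MLIP committees and obtains bias forces and the bias stress via automatic differentiation.

```python
# Toy 1-D sketch of uncertainty-biased MD with a committee of model potentials.
import numpy as np

rng = np.random.default_rng(2)
params = 1.0 + 0.05 * rng.standard_normal((8, 2))     # toy 8-member "committee"

def ensemble_energies(x):
    a, b = params[:, 0], params[:, 1]
    return a * (x ** 2 - 1.0) ** 2 + 0.1 * b * x      # perturbed double-well potentials

def biased_energy(x, alpha=8.0):
    e = ensemble_energies(x)
    return e.mean() - alpha * e.std()                 # lower the energy where uncertainty is high

def biased_force(x, h=1e-4):
    # central finite difference; the paper uses automatic differentiation instead
    return -(biased_energy(x + h) - biased_energy(x - h)) / (2.0 * h)

x, dt, kT = -1.0, 1e-3, 0.1                           # overdamped Langevin parameters (assumed)
visited = []
for _ in range(20000):
    x += dt * biased_force(x) + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
    visited.append(x)

visited = np.array(visited)
print(f"explored x in [{visited.min():.2f}, {visited.max():.2f}] "
      f"(unbiased wells sit near x = -1 and x = +1)")
```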
Funding: This work was supported by the National Science Foundation of USA under Grant No. CCF-1216569 and a CAREER award of the National Science Foundation of USA under Grant No. CCF-0968667.
Abstract: Parallel programs consist of a series of code sections with different degrees of thread-level parallelism (TLP). As a result, it is rather common that a thread in a parallel program, such as a GPU kernel in CUDA programs, still contains both sequential code and parallel loops. In order to leverage such parallel loops, the latest NVIDIA Kepler architecture introduces dynamic parallelism, which allows a GPU thread to start another GPU kernel, thereby reducing the overhead of launching kernels from a CPU. However, with dynamic parallelism, a parent thread can only communicate with its child threads through global memory, and the overhead of launching GPU kernels is non-trivial even within GPUs. In this paper, we first study a set of GPGPU benchmarks that contain parallel loops, and highlight that these benchmarks do not have a very high loop count or a high degree of TLP. Consequently, the benefits of leveraging such parallel loops using dynamic parallelism are too limited to offset its overhead. We then present our proposed solution to exploit nested parallelism in CUDA, referred to as CUDA-NP. With CUDA-NP, we initially enable a high number of threads when a GPU program starts, and use control flow to activate different numbers of threads for different code sections. We implement the proposed CUDA-NP framework using a directive-based compiler approach. For a GPU kernel, an application developer only needs to add OpenMP-like pragmas to parallelizable code sections; the CUDA-NP compiler then automatically generates the optimized GPU kernels. It supports both the reduction and the scan primitives, explores different ways to distribute parallel loop iterations onto threads, and efficiently manages on-chip resources. Our experiments show that, for a set of GPGPU benchmarks which have already been optimized and contain nested parallelism, the proposed CUDA-NP framework further improves performance by up to 6.69 times, and by 2.01 times on average.
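One of the design choices mentioned above is how parallel-loop iterations are distributed onto the already-active GPU threads. The sketch below shows the two standard mappings (block and cyclic) in plain Python purely for illustration; it is not the code CUDA-NP generates.

```python
# Sketch of two common policies for distributing parallel-loop iterations
# across a fixed pool of worker threads.
def block_map(num_iters: int, num_threads: int):
    """Contiguous chunk of iterations per thread."""
    chunk = (num_iters + num_threads - 1) // num_threads
    return {t: list(range(t * chunk, min((t + 1) * chunk, num_iters)))
            for t in range(num_threads)}

def cyclic_map(num_iters: int, num_threads: int):
    """Iteration i goes to thread i % num_threads (often better memory coalescing on GPUs)."""
    return {t: list(range(t, num_iters, num_threads)) for t in range(num_threads)}

print(block_map(10, 4))   # {0: [0, 1, 2], 1: [3, 4, 5], 2: [6, 7, 8], 3: [9]}
print(cyclic_map(10, 4))  # {0: [0, 4, 8], 1: [1, 5, 9], 2: [2, 6], 3: [3, 7]}
```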
Abstract: Remotely sensing an object with light is essential for burgeoning technologies such as autonomous vehicles. Here, an object's rotational orientation is remotely sensed using light's orbital angular momentum (OAM). An object is illuminated by, and partially obstructs, a Gaussian light beam. Using a spatial light modulator (SLM), the phase differences between the partially obstructed beam's constituent OAM modes are measured, analogous to Stokes polarimetry. It is shown that the phase differences are directly proportional to the object's rotational orientation. Comparison with the use of a pixelated camera and implementation in the millimeter-wave regime are discussed.
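The underlying relation can be checked numerically: rotating an obstruction by an angle theta multiplies each azimuthal (OAM) Fourier component of the transmitted field by exp(-i*l*theta), so inter-modal phase differences grow linearly with orientation. The sketch below assumes a soft-edged angular obstruction purely for illustration; it does not model the SLM-based measurement itself.

```python
# Numerical sketch: azimuthal (OAM) phase shifts scale linearly with rotation angle.
import numpy as np

n = 4096
phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)   # azimuthal coordinate

def transmitted(theta):
    """Azimuthal transmission: unity minus a soft angular notch (the 'object') at angle theta."""
    d = (phi - theta + np.pi) % (2.0 * np.pi) - np.pi     # wrapped angular distance
    return 1.0 - np.exp(-d ** 2 / (2.0 * 0.4 ** 2))

def oam_coeff(field, l):
    """Projection of the azimuthal profile onto the OAM mode exp(i*l*phi)."""
    return (field * np.exp(-1j * l * phi)).mean()

theta = 0.3                                               # object rotation in radians
for l in (1, 2, 3):
    shift = -np.angle(oam_coeff(transmitted(theta), l) / oam_coeff(transmitted(0.0), l))
    print(f"l = {l}: measured phase shift {shift:.3f} rad, expected l*theta = {l * theta:.3f} rad")
```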
Funding: Supported in part by the U.S. Army Research Laboratory under Cooperative Agreement Nos. W911NF-09-2-0053 (NS-CTA) and W911NF-11-20086 (Cyber-Security); the U.S. Army Research Office under Cooperative Agreement No. W911NF-13-1-0193; DTRA; and U.S. National Science Foundation grants CNS-0931975, IIS-1017362, IIS-1320617, and IIS-1354329.
Abstract: A Cyber-Physical System (CPS) integrates physical devices (i.e., sensors) with cyber (i.e., informational) components to form a context-sensitive system that responds intelligently to dynamic changes in real-world situations. Such systems have wide applications in scenarios such as traffic control, battlefield surveillance, and environmental monitoring. A core element of CPS is the collection and assessment of information from noisy, dynamic, and uncertain physical environments integrated with many types of cyber-space resources. The potential of this integration is unbounded, but to achieve it the raw data acquired from the physical world must be transformed into usable knowledge in real time. CPS therefore brings a new dimension to knowledge discovery because of the emerging synergism of the physical and the cyber: the various properties of the physical world must be addressed in information management and knowledge discovery. This paper discusses the problem of mining sensor data in CPS: with a large number of wireless sensors deployed in a designated area, the task is real-time detection of intruders that enter the area based on noisy sensor data. The IntruMine framework is introduced to discover intruders from untrustworthy sensor data. IntruMine first analyzes the trustworthiness of the sensor data, then detects the intruders' locations, and finally verifies the detections based on a graph model of the relationships between sensors and intruders.
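As a toy illustration of why the trustworthiness step matters, the hedged sketch below localizes a single intruder with a trust-weighted centroid of noisy sensor energy readings; a few faulty sensors report spurious values and are down-weighted. This only conveys the flavour of trust-weighted estimation and is not the IntruMine algorithm, which additionally verifies detections on a sensor-intruder graph model.

```python
# Toy trust-weighted localisation from noisy sensor energies (all values are made up).
import numpy as np

rng = np.random.default_rng(3)
sensors = rng.uniform(0.0, 100.0, size=(30, 2))            # sensor positions in a 100 m x 100 m field
intruder = np.array([42.0, 67.0])                          # hypothetical ground-truth position

dist = np.linalg.norm(sensors - intruder, axis=1)
energy = 100.0 / (1.0 + dist ** 2) + rng.normal(0.0, 0.01, 30)   # noisy received energies

bad = rng.choice(30, size=5, replace=False)
energy[bad] = rng.uniform(0.5, 1.5, size=5)                # faulty sensors report spurious energies
trust = np.ones(30)
trust[bad] = 0.05                                          # assumed output of a trust-analysis step

def weighted_centroid(weights):
    w = np.clip(weights, 0.0, None)
    return (w[:, None] * sensors).sum(axis=0) / w.sum()

print("true position  :", intruder)
print("ignoring trust :", weighted_centroid(energy).round(1))
print("trust-weighted :", weighted_centroid(energy * trust).round(1))
```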
Funding: Supported in part by the National Natural Science Foundation of China under Grant Nos. 61228203, 61225007, and 61272157; the National Science Foundation of USA under Grant Nos. CCF-1349666, CNS-1434582, CCF-1434596, CCF-1434590, and CNS-1439481; and a Microsoft Research award.
Abstract: In recent years, to maximize the value of software testing and analysis, we have proposed the methodology of cooperative software testing and analysis (cooperative testing and analysis for short) to enable testing and analysis tools to cooperate with their users (in the form of tool-human cooperation) and to enable one tool to cooperate with another tool (in the form of tool-tool cooperation). Such cooperation is motivated by the observation that a tool is typically not powerful enough to address all the complications in testing or analysis of complex real-world software, and the tool user or another tool may be able to help with some of the problems the tool faces. To enable tool-human or tool-tool cooperation, effective mechanisms need to be developed 1) for a tool to communicate the problems it faces to the tool user or another tool, and 2) for the tool user or another tool to assist the tool in addressing those problems. This methodology of cooperative testing and analysis forms a new research frontier on synergistic cooperation between humans and tools, along with cooperation between tools. This article presents recent example advances and challenges in cooperative testing and analysis.
Abstract: We propose a cascade system of filters for realizing a non-uniform waveband separation for optical networks. Such separation is required at the DEMUX stage of an optical cross-connect (OXC) switching wavebands. The design of the system is based on an optimized balanced tree, which minimizes the overall optical loss.
Funding: Supported by the U.S. Department of Transportation, Office of the Assistant Secretary for Research and Technology (OST-R), Grant No. 69A3551847102, issued to Rutgers, The State University of New Jersey. The authors thank Dr. Ting Wang from NEC Labs for his kind support of this work.
Abstract: Urban flooding is becoming a common and devastating hazard that causes loss of life and economic damage. Monitoring and understanding urban flooding at a highly localized scale is a challenging task due to the complicated urban landscape, the intricate hydraulic processes, and the lack of high-quality, high-resolution data. Emerging smart-city technology such as monitoring cameras provides an unprecedented opportunity to address the data issue. However, estimating water-ponding extents on land surfaces from monitoring footage is unreliable with traditional segmentation techniques, because the boundary of the water ponding, under the influence of varying weather, background, and illumination, is usually too fuzzy to identify, and the oblique angle and image distortion in video monitoring data prevent georeferencing and object-based measurements. This paper presents a novel semi-supervised segmentation scheme for recognizing surface-water extent from the footage of an oblique monitoring camera. The semi-supervised segmentation algorithm was found suitable for determining the water boundary, and the monoplotting method was successfully applied to georeference the pixels of the monitoring video for virtual quantification of the local drainage process. Correlation and mechanism-based analyses demonstrate the value of the proposed method in advancing the understanding of local drainage hydraulics. The workflow and methods created in this study have great potential for studying other street-level and earth-surface processes.
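Monoplotting georeferences single oblique images using the camera model and terrain data. As a much-simplified, hedged sketch of the pixel-to-map idea, the code below fits a planar homography from four ground-control points and projects an arbitrary pixel to map coordinates; the control-point values are invented, and a flat ground plane is assumed, which real monoplotting does not require.

```python
# Planar (flat-ground) sketch of georeferencing oblique camera pixels.
import numpy as np

def fit_homography(px, world):
    """Direct linear transform from >= 4 pixel/world correspondences."""
    rows = []
    for (u, v), (x, y) in zip(px, world):
        rows.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        rows.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def pixel_to_world(H, u, v):
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# Four ground-control points: (pixel u, v) -> (easting, northing) in metres (hypothetical values).
px    = [(120, 700), (1800, 680), (1500, 300), (300, 320)]
world = [(0.0, 0.0), (25.0, 0.0), (22.0, 40.0), (3.0, 42.0)]

H = fit_homography(px, world)
print("pixel (960, 500) maps to", tuple(round(c, 2) for c in pixel_to_world(H, 960, 500)))
```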
Abstract: For this special section on software systems, six research leaders in software systems, as VIP editors for this special section, discuss important issues that will shape the field's future research directions. The essays included in this roundtable article cover research opportunities and challenges for large-scale software systems, such as querying organization-wide software behaviors (Xusheng Xiao), logging and log analysis (Jian-Guang Lou), engineering reliable cloud distributed systems (Shan Lu), usage data (David C. Shepherd), clone detection and management (Xin Peng), and code search and beyond (Qian-Xiang Wang). - Tao Xie, Leading Editor of Software Systems.
Funding: This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 865855). The authors acknowledge support by the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through grant No. INST 40/575-1 FUGG (JUSTUS 2 cluster). We thank the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for supporting this work by funding EXC 2075-390740016 under Germany's Excellence Strategy. We acknowledge the support of the Stuttgart Center for Simulation Science (SimTech). K.G. and B.G. acknowledge support from the collaborative DFG-RFBR Grant (Grants No. DFG KO 5080/3-1 and DFG GR 3716/6-1). K.G. also acknowledges the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Project-ID 358283783 - SFB 1333/2 2022. V.Z. acknowledges financial support received in the form of a PhD scholarship from the Studienstiftung des Deutschen Volkes (German National Academic Foundation). P.S. would like to thank the Alexander von Humboldt Foundation for their support through the Alexander von Humboldt Postdoctoral Fellowship Program. A.D. acknowledges support through EPSRC grant EP/S032835/1.
Abstract: Chemically complex multicomponent alloys possess exceptional properties derived from an inexhaustible compositional space. The complexity, however, makes interatomic-potential development challenging. We explore two complementary machine-learned potentials, the moment tensor potential (MTP) and the Gaussian moment neural network (GM-NN), in simultaneously describing the configurational and vibrational degrees of freedom in the Ta-V-Cr-W alloy family. Both models are equally accurate, with excellent performance evaluated against density-functional theory. They achieve root-mean-square errors (RMSEs) in energies of less than a few meV/atom across 0 K ordered and high-temperature disordered configurations included in the training. Even for compositions not in the training set, relative energy RMSEs at high temperatures are within a few meV/atom. High-temperature molecular dynamics forces have similarly small RMSEs of about 0.15 eV/Å for the disordered quaternary included in the training and for ternaries not part of it. MTPs achieve faster convergence with training-set size; GM-NNs are faster in execution. Active learning is partially beneficial and should be complemented with conventional, human-based training-set generation.
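The two headline metrics above are straightforward to compute; the hedged sketch below shows the formulas (per-atom energy RMSE in meV/atom and force-component RMSE in eV/Å) on random placeholder arrays standing in for the model predictions and DFT reference values.

```python
# Sketch of the accuracy metrics quoted above, on placeholder data.
import numpy as np

rng = np.random.default_rng(4)
n_structs, n_atoms = 200, 64

e_dft  = rng.normal(-9.0, 0.5, n_structs) * n_atoms           # total energies in eV (placeholder)
e_pred = e_dft + rng.normal(0.0, 0.002 * n_atoms, n_structs)   # ~2 meV/atom model error (placeholder)
f_dft  = rng.normal(0.0, 1.0, (n_structs, n_atoms, 3))         # force components in eV/A (placeholder)
f_pred = f_dft + rng.normal(0.0, 0.15, f_dft.shape)            # ~0.15 eV/A model error (placeholder)

e_rmse_mev_per_atom = 1000.0 * np.sqrt(np.mean(((e_pred - e_dft) / n_atoms) ** 2))
f_rmse_ev_per_a     = np.sqrt(np.mean((f_pred - f_dft) ** 2))
print(f"energy RMSE: {e_rmse_mev_per_atom:.2f} meV/atom, force RMSE: {f_rmse_ev_per_a:.3f} eV/A")
```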