The BaBar experiment stores its data in an object-oriented federated database supplied by Objectivity/DB(tm). This database is currently 350 TB in size and is expected to grow considerably as the experiment matures. Managing a database of this scale requires careful planning and specialized tools to make the data available to physicists efficiently and in a timely manner. We discuss the operational issues and the management tools that were developed during the previous run to deal with this vast quantity of data at SLAC.
Providing efficient access to more than 300 TB of experiment data is the responsibility of the BaBar Databases Group. Unlike generic tools, the Event Browser presents users with an abstraction of the BaBar data model. Multithreaded CORBA servers perform database operations using small transactions in an effort to avoid lock contention and provide adequate response times. The GUI client is implemented in Java and can be easily deployed throughout the community as a web applet. The browser allows users to examine collections of related physics events and to identify associations between the collections and the physical files in which they reside, helping administrators distribute data to other sites worldwide. This paper discusses the various aspects of the Event Browser, including its requirements, design challenges, and the key features of the current implementation.
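The small-transaction pattern described above can be sketched in miniature, with sqlite3 standing in for Objectivity/DB (the table and function names here are invented for illustration, not part of the actual Event Browser): each request opens, writes, and commits immediately, so database locks are held only briefly.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE collections (name TEXT, dbfile TEXT)")

def record_collection(name: str, dbfile: str) -> None:
    """One small transaction per request: begin, write, commit.
    Locks are released as soon as the commit returns, so other
    concurrent requests are never blocked for long."""
    with conn:  # the context manager commits (or rolls back) immediately
        conn.execute("INSERT INTO collections VALUES (?, ?)", (name, dbfile))

# Three independent requests, three independent short transactions.
for i in range(3):
    record_collection(f"run{i}", f"events{i}.db")

print(conn.execute("SELECT COUNT(*) FROM collections").fetchone()[0])
```

The design trade-off is the usual one: many short transactions cost more commits, but no single client can starve the others by holding locks across a long operation.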
The BaBar experiment collected around 20 TB of data during its first six months of running. Now, after 18 months, the data size exceeds 300 TB, and forecasts indicate this is only a small fraction of the data expected in the coming months. To keep up with the data, significant effort was put into tuning the database system. This led to great performance improvements as well as to inevitable system expansion: 450 simultaneous processing nodes are used for data reconstruction alone, and further growth beyond 600 nodes is expected soon. In such an environment, many complex operations execute simultaneously on hundreds of machines, putting a huge load on the data servers and increasing network traffic. Introducing two CORBA servers halved startup time and dramatically offloaded the database servers, both data servers and lock servers. This paper describes the design and implementation of the two servers recently introduced into the BaBar system, the conditions OID server and the Clustering Server, and discusses the first experience of using them. A discussion of a Collection Server for data analysis, currently being designed, is also included.
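The benefit of a server such as the conditions OID server can be illustrated with a toy cache: resolve an identifier once centrally and reuse the answer, so hundreds of reconstruction nodes stop hitting the database and lock servers directly. The key name and the lookup itself are invented for this sketch; the real server's protocol and data are not shown here.

```python
from functools import lru_cache

DB_LOOKUPS = {"count": 0}  # instrument how often the "database" is consulted

@lru_cache(maxsize=None)
def resolve_oid(condition_key: str) -> str:
    """The expensive path: in the real system this would be a database
    read that also exercises the lock servers."""
    DB_LOOKUPS["count"] += 1
    return f"oid-for-{condition_key}"

# 450 reconstruction nodes asking for the same conditions key...
for _ in range(450):
    resolve_oid("drift-chamber-calibration")

print(DB_LOOKUPS["count"])  # the backing store was consulted only once
```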
After less than a year of operation, the BaBar experiment at SLAC has collected almost 100 million particle collision events in a database approaching 165 TB. Around 20 TB of data has been exported via the Internet to the BaBar regional center at IN2P3 in Lyon, France, and around 40 TB of simulated data has been imported from the Lawrence Livermore National Laboratory (LLNL). BaBar collaborators plan to double data collection each year and to export a third of the data to IN2P3, so within a few years the SLAC OC3 (155 Mbps) connection will be fully utilized by file transfers to France alone. Upgrades to the infrastructure are essential, and a detailed understanding of performance issues and of the requirements for reliable high-throughput transfers is critical. This talk reviews results from active and passive monitoring and from direct measurements of throughput, and discusses methods for achieving these ambitious requirements.
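A back-of-envelope calculation, using only the figures quoted in the abstract (165 TB dataset, a third exported yearly, yearly doubling, OC3 at 155 Mbps), supports the "within a few years" claim. The choice of a third of the current dataset as the starting yearly export volume is an assumption for this sketch.

```python
SECONDS_PER_YEAR = 365 * 24 * 3600
OC3_MBPS = 155.0

def required_mbps(tb_per_year: float) -> float:
    """Average rate needed to ship tb_per_year terabytes in one year."""
    bits = tb_per_year * 1e12 * 8
    return bits / SECONDS_PER_YEAR / 1e6

export_tb = 165 / 3  # a third of the ~165 TB dataset, exported each year
years = 0
while required_mbps(export_tb) < OC3_MBPS:
    years += 1
    export_tb *= 2  # data collection (and hence the export) doubles yearly

print(years)  # doublings before the IN2P3 export alone exceeds the OC3
```

Starting from roughly 14 Mbps of sustained export traffic, yearly doubling overtakes the 155 Mbps link after four doublings, before allowing for any other use of the connection.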
Understanding how the Internet is used by HEP is critical to optimizing the performance of the inter-laboratory computing environment. Typically, usage requirements have been defined through discussions between collaborators; however, later analysis of the actual traffic has shown that these are often misunderstood and that actual use differs significantly from predictions. Passive monitoring of real traffic provides insight into the true communication requirements and the performance of a large number of inter-communicating nodes. It can be useful in identifying performance problems due to factors other than Internet congestion, especially when compared with methods such as active monitoring, where traffic is generated specifically to measure its performance. Controlled active monitoring between dedicated servers often indicates what can be achieved on a network; passive monitoring of real traffic gives a picture of the true performance. This paper discusses the method and results of collecting and analyzing flows of data obtained at the SLAC Internet border. The unique nature of HEP traffic and the needs of the HEP community are highlighted, the insights this has brought to understanding the network are reviewed, and the benefits it can bring to engineering networks are discussed.
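The kind of aggregation applied to passively collected flow records can be sketched as follows. The record format, host names, and byte counts below are invented for the example; real analyses would work from border-router flow accounting, not this toy list.

```python
from collections import defaultdict

# (remote_host, protocol, bytes_transferred) -- hypothetical flow records
flows = [
    ("ccin2p3.example.fr", "ftp", 9_000_000_000),
    ("ccin2p3.example.fr", "ftp", 7_500_000_000),
    ("llnl.example.gov", "bbftp", 4_000_000_000),
    ("cern.example.ch", "http", 120_000_000),
]

# Aggregate traffic volume per remote site to find the "top talkers".
by_site = defaultdict(int)
for host, proto, nbytes in flows:
    by_site[host] += nbytes

for host, total in sorted(by_site.items(), key=lambda kv: -kv[1]):
    print(f"{host}: {total / 1e9:.1f} GB")
```

Even this trivial aggregation shows the characteristic HEP pattern the paper describes: a handful of bulk-transfer peers dwarfing everything else, which controlled active tests between dedicated servers would never reveal.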
The importance of the Internet to modern High Energy Physics collaborations is clearly immense, and understanding how new developments in network technology affect networks is critical to the future design of experiments. The next-generation Internet Protocol (IPv6) is being deployed on testbeds and production networks throughout the world. The protocol has been designed to solve today's Internet problems, and many of its features will be core Internet services in the future. This talk describes the features of the protocol and gives details of its deployment at sites important to High Energy Physics research and of the network services operating at these sites; in particular, IPv6 deployment on the U.S. Energy Sciences Network (ESnet) is reviewed. The connectivity and performance between High Energy Physics laboratories, universities, and institutes are discussed.
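One of the headline IPv6 features alluded to above, the vastly larger address space, can be illustrated numerically. The prefix used is the IPv6 documentation prefix (2001:db8::/32, reserved for examples), not a real ESnet allocation.

```python
import ipaddress

# A single /32 allocation -- a typical provider-scale block -- contains
# 2**96 addresses, more than the entire 2**32 IPv4 address space squared.
net = ipaddress.ip_network("2001:db8::/32")
print(net.num_addresses)

# The compressed "::" notation expands to the full eight 16-bit groups.
addr = ipaddress.ip_address("2001:db8::1")
print(addr.exploded)  # 2001:0db8:0000:0000:0000:0000:0000:0001
```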
The BABAR database, based upon the Objectivity object-oriented database management system, has been in production since early 1999. It has met its initial design requirements, which were to accommodate a 100 Hz event rate from the experiment at a scale of 200 TB per year. However, with increased luminosity and changes in the physics requirements, these requirements have grown significantly for the current running period and will grow again in the future. New capabilities in the underlying ODBMS product, in particular multiple-federation and read-only database support, have been incorporated into a new design that is backwards compatible with existing application code while scaling into the multi-petabyte regime. Other optimizations, including the increased use of tightly coupled CORBA servers and an improved awareness of space inefficiencies, are also playing a part in meeting the new scaling requirements. We discuss these optimizations and the prospects for further scaling enhancements to address the longer-term needs of the experiment.