Open Call for Proposals: April 4-November 24, 2021
Internet2 Community Voices Series
Join Us to Learn, Share, Connect
The Internet2 Community Voices Series provides the opportunity to hear from experts, learn from their research, and connect with the community each month beginning in May 2021. Register for the August 12th event.
Survey says: It’s time to reconnect. Responses from a recent Internet2 community survey highlight the need to bring the community together to share expertise and spark conversations around emerging challenges.
The Community Voices Series features the latest research from community experts and innovators, their lessons learned, and the chance to connect with the community. Each talk will be a stand-alone event, with a registration to access the live talk and the post-event recording and supporting materials. Note: We are highlighting the next two Community Voices events on this page. Look for later events on the I2 Online listings!
Event title: Network Routing Security at Scale
Date, Time: Aug. 12, 1 p.m. ET
Description: Information and Technology Services (ITS) provides and maintains the University of Michigan’s network (UMnet). UMnet is the core unifying technology connecting all schools, colleges, and institutes to each other and the internet. Further, it is the critical core technology enabler for the multitude of technologies U-M’s various missions depend on to operate.
UMnet connects directly to the internet over multiple redundant 100 Gbps links, offering U-M immense flexibility and the opportunity to collaborate with anyone, anywhere. However, no cost-effective solutions are available today to protect the network, so UMnet remains vulnerable to network security threats that could impact its availability to the U-M community.
In response to this challenge, ITS completed a proof-of-concept for a custom network border security system that scales with the university’s network capacity needs and has since deployed the full-scale production solution. Based on a model pioneered by Lawrence Berkeley National Laboratory and adopted by Indiana University, Ohio State, and Purdue, the solution combines vendor components with open-source software (Zeek). It uses various methods to classify network traffic as a threat or not, then modifies access control lists on an inline device to stop the threat. When large research flows are detected, they are passed through without further inspection, allowing the cluster to keep up with the flow of data.
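The classify-then-shunt pattern described above can be sketched in a few lines. This is an illustrative toy, not U-M’s actual system: the byte threshold, field names, and the `dispatch` helper are all hypothetical, and the threat verdict stands in for output from an analyzer such as Zeek.

```python
# Illustrative sketch of the border-security dispatch logic described in the
# talk: threats get an ACL-style block, large research ("elephant") flows are
# shunted past inspection, and everything else goes to the inspection cluster.
# All names and thresholds here are hypothetical.

from dataclasses import dataclass

ELEPHANT_BYTES = 10 * 1024**3  # hypothetical 10 GB cutoff for research flows


@dataclass
class Flow:
    src: str
    dst: str
    bytes_seen: int
    flagged_as_threat: bool  # verdict from an analyzer such as Zeek


def dispatch(flow: Flow) -> str:
    """Return the action the border system would take for this flow."""
    if flow.flagged_as_threat:
        return "block"    # push a drop rule to the inline device's ACL
    if flow.bytes_seen >= ELEPHANT_BYTES:
        return "shunt"    # pass through without further inspection
    return "inspect"      # hand off to the inspection cluster
```

The key design point the abstract highlights is the shunt branch: by exempting known-good, high-volume science flows from deep inspection, the inspection cluster only has to keep pace with ordinary traffic.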
Speaker: Daniel Eklund, University of Michigan
Event title: Characterizing the Networking In and Out of the Public Cloud
Date, Time: Sept. 2, 1-2 p.m. ET
Description: Funding agencies are taking an increasingly positive stance toward public clouds. NSF alone has funded several Cloud-focused proposals, including E-CAS, CloudBank, and the multi-cloud IceCube simulation pilot runs. There is growing awareness and recognition that public Cloud providers offer capabilities not found elsewhere, with elasticity being a major driver.
The value of elastic scaling is, however, tightly coupled to the capabilities of the networks that connect all involved resources, both in the public Clouds and at the various research institutions. We have therefore put in place a benchmarking testbed that collects data about network performance, both between nodes inside the major public Clouds and between nodes at research institutions and Cloud nodes. Many of the tested on-prem nodes rely on links operated by Internet2.
In this presentation, we present the observed peak performance on the various links, as well as a summary of how network performance changes over time. We also give an overview of the costs associated with using networking in the public Clouds.
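The two views the talk presents, observed peak and behavior over time, amount to a simple reduction over per-link throughput samples. A minimal sketch, with hypothetical sample data rather than the project’s actual harness or measurements:

```python
# Reduce per-link throughput samples to per-link summaries: observed peak
# and mean throughput. Link names and numbers below are made up for
# illustration; they are not the project's measurements.

from collections import defaultdict
from statistics import mean


def summarize(samples):
    """samples: iterable of (link, gbps) pairs -> {link: summary dict}."""
    by_link = defaultdict(list)
    for link, gbps in samples:
        by_link[link].append(gbps)
    return {
        link: {"peak_gbps": max(vals), "mean_gbps": round(mean(vals), 2)}
        for link, vals in by_link.items()
    }


samples = [
    ("campus->cloudA", 8.1), ("campus->cloudA", 9.4),
    ("cloudA->cloudB", 24.0), ("cloudA->cloudB", 22.5),
]
summary = summarize(samples)
```

In practice, the time dimension matters as much as the peak: grouping the same samples by hour or day instead of collapsing them would expose the over-time variation the presentation summarizes.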
Speaker: Igor Sfiligoi, Lead Scientific Software Developer and Researcher at UCSD-SDSC
Event title: Running a 380 PFLOP32s GPU Burst for Multi-Messenger Astrophysics with IceCube
Date, Time: Oct. 15, 1:30-2:30 p.m. ET
Description: The IceCube Neutrino Observatory is the National Science Foundation’s (NSF) premier facility for detecting neutrinos with energies above approximately 10 GeV and a pillar of NSF’s Multi-Messenger Astrophysics (MMA) program, one of NSF’s 10 Big Ideas. The detector is located at the geographic South Pole and is designed to detect interactions of neutrinos of astrophysical origin by instrumenting over a gigaton of polar ice with 5,160 optical sensors. The sensors are buried between 1,450 and 2,450 meters below the surface of the South Pole ice sheet.
To understand how ice properties affect the detection and reconstructed origin of incoming neutrinos, photon-propagation simulations on GPUs are used. We executed a series of runs in which we aggregated O(100 PFLOPS) worth of GPUs across multiple public Cloud providers and used them to run the IceCube simulations to produce much-needed calibration data. One such run harvested all GPUs available for sale across Amazon Web Services, Microsoft Azure, and Google Cloud Platform the weekend before SC19, reaching over 51k GPUs in total and 380 PFLOP32s, with GPU types spanning the full range of generations from the NVIDIA GRID K520 to the most modern NVIDIA T4 and V100. Another run, while smaller in scale, sustained peak performance for much longer and used just-in-time fetching of input data. We report on the tools and effort needed to create and operate such systems, as well as the science motivation for doing so.
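An aggregate figure like “380 PFLOP32s” is simply the sum, over heterogeneous GPU types, of each type’s count times its peak FP32 rating. A back-of-the-envelope sketch, where the per-type TFLOPS are approximate published vendor figures and the inventory in the usage example is hypothetical, not the run’s actual breakdown:

```python
# Compose an aggregate peak FP32 figure from a heterogeneous GPU inventory.
# TFLOPS values are approximate vendor peak FP32 ratings per GPU; any
# inventory passed in is illustrative, not the SC19 run's actual counts.

GPU_FP32_TFLOPS = {
    "NVIDIA GRID K520": 2.4,  # per GPU, older generation
    "NVIDIA T4": 8.1,
    "NVIDIA V100": 14.0,      # PCIe variant
}


def aggregate_pflops(inventory):
    """inventory: {gpu_type: count} -> total peak FP32 PFLOPS."""
    return sum(GPU_FP32_TFLOPS[t] * n for t, n in inventory.items()) / 1000.0


# Hypothetical mix, for illustration only:
total = aggregate_pflops({"NVIDIA T4": 20000, "NVIDIA V100": 5000})
```

This also shows why a 51k-GPU fleet spanning generations matters: a modern V100 contributes several times the FP32 throughput of an older board, so the mix of types, not just the raw count, determines the aggregate.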
Speakers: Igor Sfiligoi, Lead Scientific Software Developer and Researcher at UCSD-SDSC; Frank Würthwein, Director of OSG, HTC Lead at SDSC, and Professor of Physics at UCSD
Call for Proposals
We’re counting on your innovative contributions to make the Community Voices Series a valuable resource to the community. Submit a proposal!
Proposals for talks (50 minutes), panels (50-75 minutes), and lightning talks (5-10 minutes) are being solicited in eight areas of interest:
- Advanced Networking
- Arts & Humanities
- Cloud Implementations
- Global Interactions
- InCommon Trusted Access
- Information Security
- Policy & Administration
- Research Engagement & Support
This call is a “rolling” process: as proposals are received, they are reviewed by a Program Committee composed of distinguished representatives from the research and education community. Selected proposals are scheduled (based on submitter availability) between now and the end of the year.
Event Title: The UCLA Health Sciences HIPAA-Compliant Cloud Journey for Academic Research
Date, Time: July 30, 1:30-2:30 p.m. ET
Recording: Coming soon! | View the presentation
Description: Cloud computing has become a competitive advantage for all organizations, and academic research is no exception. The challenge academic research institutions often encounter in creating a cloud offering is doing so with constrained IT resources (staff numbers and cloud expertise) while making it accessible to a research/scientific community that does not have strong IT technical expertise. Compound this with the need to secure sensitive information within compliance frameworks such as HIPAA, and you have a glimpse into what our journey at UCLA Health Sciences IT has been. We will share our experience on this journey: successes, failures, and future goals.
Event Title: Science Gateways: Sustaining Integrated Solutions for Accelerating Science
Date, Time: July 1, 1-2 p.m. ET
Recording: Watch video (opens in new window)
Description: The computational landscape has never evolved as fast as in the last 10 years: novel research and data infrastructures, as well as hardware architectures and lab instruments, make it possible to answer research questions that could not even have been asked 10 years ago.
Our panel discussed experiences with mature, internationally used science gateway frameworks such as Apache Airavata, HUBzero, and the Globus Data Platform, as well as good practices for sustainability measures.
Event Title: Open Science Operational Cybersecurity in Action: ResearchSOC Deployment at NSF Facilities
Date, Time: June 17, 2021, 1-2 p.m. ET
Recording: View Recording (opens in new window) | Download the presentation
Description: This presentation from the ResearchSOC educated the community on initial observations from this NSF-sponsored cybersecurity operations center’s onboarding of three NSF major facilities, as well as describing common challenges experienced across the higher education landscape as seen by ResearchSOC’s core partner, the OmniSOC.
Event Title: The Important Role for R&E in Internet Governance and Technology Policy
Date, Time: June 3, 2021, 1-2 p.m. ET
Recording: View Recording (opens in new window)
Description: A dynamic group of speakers covered broadband policy, advocacy, cybersecurity and identity management, and tips on how to talk to policymakers. They provided examples of past successes of the collective power of R&E in policy debates, discussed the difference between lobbying and advocacy and what we are allowed to do, and suggested high-profile topics the R&E community should consider addressing first.
Event Title: Rutgers University Federated Research Computing and Data Ecosystem
Date, Time: Thursday, May 6, 1-2 p.m. ET
Recording: View the recording (opens in new window) | Download the presentation
Description: The Office of Advanced Research Computing (OARC) is a university-wide, centrally supported organization created in 2016 to provide strategic leadership, expertise, and support to further Rutgers University’s research and scholarly achievements on all campuses through next-generation computing, data science, advanced networking, public and private clouds, and creative learning environments. OARC is uniquely situated between research and campus IT, reporting directly to both the Senior Vice President of the Office for Research and the Senior Vice President and Chief Information Officer of the Office of Information Technology (OIT).
During this presentation, Dr. von Oehsen gave an overview of the Rutgers University Federated Research Computing and Data Ecosystem.
Speaker: Barr von Oehsen