View detailed abstracts for featured 2022 Internet2 Technology Exchange sessions. You can sort by track to find the abstracts.
2022 Internet2 Technology Exchange | Dec. 5-9, Denver, Colo.
View Abstracts by Track
George Loftus, acting vice president of Network Services, will kick off the Advanced Networking track with an overview of what is happening in Internet2’s Network Services group and an introduction to the Network Services team!
Chris Wilkinson, Internet2 director of Network Architecture and Planning, will talk about the NGI build of optical and packet infrastructure, what the network group is currently doing and what lies on the horizon to continue to serve the needs of the community.
Learn about deploying IPv6, RPKI and DNSSEC, keeping your ARIN data accurate, and navigating the IPv4 Transfer Market.
Like most research and education networks (RENs), AmLight has diverse science drivers and workflows. While some science drivers operate on a best-effort basis, others require low and steady jitter and delay, and some have strict SLAs for packet loss and service outages. For international, long-haul, highly distributed RENs such as AmLight, the critical component for delivering the quality of service users demand is an efficient network monitoring framework. Monitoring the network state and events must be done accurately and as close to real time as possible. Legacy technologies such as SNMP and flow sampling have long been the main approaches used by network operators to monitor network state, but they have well-known limitations, for instance the lack of real-time support.
In 2014, when AmLight became an SDN infrastructure with OpenFlow, OpenFlow counters were added to the existing SNMP/sFlow monitoring combo. However, OpenFlow counters suffered the same inconsistencies as SNMP: each vendor had its own level of accuracy and minimum polling interval. Multiple vendors were evaluated on the quality of the counters provided via OpenFlow, and results were presented at the 2016 Traffic Monitoring and Analysis (TMA) workshop and the 2016 Internet2 Technology Exchange. In the end, AmLight learned that SNMP + sFlow + OpenFlow counters were not sufficient to achieve its monitoring goals.
Since 2018, AmLight has focused on monitoring the network by leveraging advances in programmable data planes, specifically In-band Network Telemetry (INT). As no commercial solution for leveraging INT was available, AmLight built its own telemetry export and collection framework in partnership with NoviFlow and Barefoot Networks. The development of this telemetry framework was the scope of work of the NSF AmLight-INT project. The academic community praised the project’s outcomes, and the solution has been presented multiple times since 2018, including at the 2019 Internet2 Technology Exchange. In production at AmLight, INT gives real-time per-packet visibility with zero impact on the devices’ network functions, since all telemetry data is gathered and exported directly from the switch’s ASIC.
Although INT provides visibility of 100% of the packets flowing through programmable switches, its goals are focused on real-time, granular network state monitoring. Leveraging INT for long-term reports and top talkers can lead to an expensive monitoring environment due to the amount of data generated. Moreover, AmLight is not entirely composed of programmable switches, as it also relies on legacy Top of Rack (ToR) switches and Juniper MX routers. As a result, monitoring legacy devices remains mandatory for AmLight. To understand how close to real time its legacy devices can accurately report, AmLight pushed the limits of its legacy monitoring tools. Juniper expanded its MX family with a network streaming telemetry solution called the Juniper Telemetry Interface (JTI), which provides telemetry summaries every two seconds. JTI aims to replace SNMP with a closer-to-real-time way to gather interface counters, and it has been in use at AmLight since 2021.
In 2022, AmLight performed multiple simulations and field evaluations to compare how each monitoring technology reported specific events, from microbursts to DDoS attacks; real events of both kinds also helped shape the solution to its needs. AmLight compared INT, SNMP, sFlow, and JTI by creating paths that included all of these technologies and using highly precise traffic generators to create network events. Results, methodology, and lessons learned will be presented, as well as how AmLight combined its monitoring technologies to guarantee that science drivers’ expectations are met.
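The gap between counter polling and per-packet telemetry that motivates this work can be sketched in a few lines (all figures are hypothetical, chosen only to illustrate the effect):

```python
# Minimal sketch with made-up numbers: why a 30-second SNMP counter poll
# hides a microburst that per-packet telemetry (INT) makes obvious.

LINK_BPS = 100e9        # 100G link
POLL_INTERVAL_S = 30.0  # typical SNMP polling interval

def utilization_from_counters(bytes_delta: float, interval_s: float) -> float:
    """Average utilization as a counter-polling system would report it."""
    return (bytes_delta * 8) / (LINK_BPS * interval_s)

def peak_utilization(packets, window_s: float = 0.001) -> float:
    """Peak utilization over a 1 ms sliding window, which per-packet
    timestamps allow. `packets` is a list of (timestamp_s, size_bytes)."""
    peak = 0.0
    for t0, _ in packets:
        window_bytes = sum(s for t, s in packets if t0 <= t < t0 + window_s)
        peak = max(peak, (window_bytes * 8) / (LINK_BPS * window_s))
    return peak

# A 1 ms burst at full line rate inside an otherwise idle 30 s interval:
burst_bytes = LINK_BPS / 8 * 0.001  # 12.5 MB sent during the burst
packets = [(10.0 + i * 0.0001, burst_bytes / 10) for i in range(10)]

avg = utilization_from_counters(burst_bytes, POLL_INTERVAL_S)
peak = peak_utilization(packets)
print(f"SNMP 30 s average: {avg:.5%}")   # ~0.00333%: the burst is invisible
print(f"1 ms peak:         {peak:.0%}")  # 100%: the microburst stands out
```

The same 12.5 MB burst shows up as a rounding error in the 30-second average but saturates the link at millisecond resolution, which is why per-packet visibility matters for microburst and DDoS analysis.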
I2 Insight Console is a single place to manage all of your Internet2 network services. Take I2 Insight Console for a test drive and help us spot design errors. You won’t want to miss this opportunity to get an early look at the new visual design and provide input or feedback, especially if you are a network engineer or cloud architect.
Schedule a test drive (optional) at https://usability.ns.internet2.edu or drop by to watch a test in progress.
Tuesday December 6
9 – 9:30 a.m.
10:30 – 11:45 a.m.
1:30 – 4:45 p.m.
Wednesday December 7
8 – 9:30 a.m.
11:15 – 11:45 a.m.
1:30 – 4:45 p.m.
It is well known that loss-based TCP congestion control algorithms are problematic for the high-speed, high-latency flows that are common in Big Science. In 2016, Google released a new congestion control algorithm called BBR (Bottleneck Bandwidth and Round-trip time) that uses a model-based approach, and the design has since been refined in an alpha release of BBRv2. In this talk, we describe a set of experiments we performed to assess the suitability of BBRv2 for use on Data Transfer Nodes (DTNs). The experiments were run on both production R&E networks and within a controlled testbed environment. Our analysis of the results shows that BBRv2 improves upon BBRv1 for common Big Science transfer scenarios and is a promising option in high-speed, short-queue networking environments. We will also present the results of running BBRv2 on 100G DTNs.
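The model-based idea can be illustrated with the bandwidth-delay product (BDP) that BBR's two path estimates multiply out to (a simplified sketch, not BBR's actual code):

```python
# Illustrative sketch (not actual BBR code): BBR's model keeps two running
# estimates, bottleneck bandwidth (btl_bw) and minimum round-trip time
# (rt_prop), and paces sending so that data in flight tracks their product,
# the bandwidth-delay product (BDP), rather than probing until loss occurs.

def bdp_bytes(btl_bw_bps: float, rt_prop_s: float) -> float:
    """Bandwidth-delay product: bytes a flow must keep in flight to fill
    the path without building a queue."""
    return btl_bw_bps / 8 * rt_prop_s

# A common Big Science scenario: 100 Gbps path, 80 ms round-trip time.
bdp = bdp_bytes(100e9, 0.080)
print(f"BDP: {bdp / 1e9:.0f} GB in flight")  # 1 GB

# A loss-based sender that cuts its window sharply on a single loss must
# then grow it back across many RTTs of this long path; a model-based
# sender can keep pacing at the estimated btl_bw instead, which is why
# BBRv2 is attractive for DTN transfers.
```

On paths this long, even rare loss events are expensive for a loss-based algorithm, because a gigabyte of window takes a very long time to rebuild.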
So, you want to do DevOps? You’ve probably heard these terms and might be wondering what they’re all about. I’ll provide a brief overview of Git, Gitlab Pipelines, Containers, and a few other tools the Internet2 NS-ISS team uses and provide a demo of some of our workflows.
Speaker: Karl Newell, Internet2
The Women in IT Networking at SC (WINS) program was developed in 2015** as a means of addressing the prevalent gender gap in IT, particularly in network engineering and high-performance computing. Each year, over 200 expert volunteers from academia, government, and industry come together to design, construct, and operate SCinet, a unique multi-terabit-per-second network built to support SC attendees’ HPC demos and a hallmark of the SC conference. WINS funds 5-8 qualified early- to mid-career female candidates to join the SCinet volunteer workforce. Selected candidates, reviewed by an expert panel, receive full travel support to participate in the construction of SCinet. SCinet takes a year to plan, a month to build, a week to operate, and just one day to tear down.
Comprising more than 20 teams, SCinet provides an ideal “apprenticeship” for engineers and communications experts looking for direct access to the most cutting-edge network hardware and software. Participants are matched with a team and mentor based on their skill set and area of interest and work side-by-side with these world-leading network, software, and systems engineers.
For this panel discussion, four WINS alumni, who have ascended to team leadership roles within SCinet, will share technical details on the unique areas of the network they have helped to build since their first experience with SCinet via WINS. The speakers will discuss the arc of their skill and management growth vis-à-vis their SCinet experience. WINS is a joint effort between the Department of Energy’s Energy Sciences Network (ESnet), the Keystone Initiative for Network Based Education and Research (KINBER), and the University Corporation for Atmospheric Research (UCAR). Funding is provided through an NSF grant* and through ESnet.
* NSF 2016 grant #ACI-1640987
** NSF 2015 grant #ACI-144064
Faced with an increased need for network automation and orchestration, and at the same time limited resources and depth of knowledge, European NRENs are progressing with the development of their digital ecosystems at different paces. To support NRENs on their path toward advanced, self-* networks and digital network support systems, the GÉANT4-3 project’s Network Technologies and Services Development work package has created the Network Automation eAcademy.
Network Automation eAcademy is an umbrella name for several activities that help NRENs progress in their digital transformation journey. It includes training, mapping of architectures against a reference digital architecture, maturity-level assessment of digital architectures, and a terminology glossary, as well as other resources and activities to foster the use of automation, orchestration, and virtualisation among R&E institutions, helping them pave the way for multi-domain collaboration and self-service provisioning. The Network Automation eAcademy is provided by the community for the community: all authors’ contributions and examples come from the R&E environment, distinguishing it from commercial offerings.
In this talk, we will present:
– Training activities developed to address needs in several areas, including but not limited to orchestration, automation and virtualisation (OAV), use cases, architectures, DevOps concepts, and Open Digital Architecture (ODA) functional blocks, from introductory to advanced-level modules. The training portfolio currently includes more than 20 modules, all available on the GÉANT eAcademy portal at https://e-academy.geant.org/moodle/ (accessible with eduGAIN or social network authentication).
In addition, we will present other Network eAcademy activities ongoing in the GÉANT4-3 project, as well as future plans in the next generation of the project – GÉANT5-1.
Network orchestration is becoming a defining factor in next-generation networks, offering clear benefits for operations. ESnet has leveraged a combination of internally developed tools and purchased software to orchestrate and automate network configuration deployment, easing operations and allowing for rapid deployment. ESnet will provide a detailed analysis of its path toward automation, showcasing successes as well as the pitfalls encountered along the way.
This talk will give an overview of the whole network transformation underway at the University of Michigan. Details will include building the next generation border security system (part of the NSF-sponsored NetBASILISK project), rebuilding the campus core, rearchitecting building networks, and replacing the WiFi network. The talk will focus on steps to achieve performance, operational efficiency, and automation.
Internet2’s Cloud Connect service has been available since late 2018. In this talk, Matt Zekauskas of Internet2 gives a quick review of Cloud Connect as it exists today, and then looks at how it’s being used (mostly from a network perspective). Matt will look at deployment patterns (including resiliency) and utilization, and what implications or conclusions can be drawn. Questions or suggestions are welcome, either during the allotted time or after.
This session will provide an overview of Internet2’s Routing Integrity Initiative. This initiative seeks to improve the routing integrity of the Internet2 ecosystem by providing actionable routing security reporting, routing security assessments for connectors and universities, educational outreach, and updated modes of managing the routing policy expressed to Internet2.
The Science DMZ has been broadly deployed in a wide range of R&E environments, from small sites to major universities, and from individual experiments to national laboratories. While the Science DMZ is still considered best practice for cyberinfrastructure supporting data-intensive science, the world continues to change.
Streaming workflows, the focus on zero trust security architecture, and higher-level services to enhance scientific productivity are all part of the Science DMZ going forward. This talk will cover the evolution and strategic future of the Science DMZ, including the incorporation of new technologies, expanded DTN capabilities, and zero trust security in a Science DMZ context.
Internet2 regularly publishes “route reports” to assist the community with understanding their use of Internet Routing Registries and the consistency of the routes announced to the Internet2 network. This session will review the information contained in these reports and how to best use it.
Creating Resource Public Key Infrastructure (RPKI) Route Origin Authorizations (ROAs) can protect your network from intentional or accidental hijacks … and they’re a free service of ARIN. This session will describe how to tell if you can use this ARIN service.
The joint APAN/GNA-G Routing Working Group was formed to address global routing issues impacting the performance of international data flows. Since its creation in June 2021, the group has addressed many different routing issues relevant to the R&E networking community: asymmetrical routing, inefficient global routing (e.g. flows crossing the same ocean twice), R&E flows using commodity routes when an R&E path is available, and changes to links impacting how traffic is routed throughout the world.
The Routing Working Group brings together members of the global REN community, nearly 170 members from over 80 organizations to date, to directly address these complex network routing issues and bring them to a swift resolution. During this BoF we will discuss the current open cases being addressed by the working group, present tools for identifying, troubleshooting, and resolving routing problems, and provide an overview of MANRS (Mutually Agreed Norms for Routing Security) and of our efforts to create best practices for the global REN community.
A quick overview of the BGPalerter tool for monitoring your routing assets. BGPalerter keeps tabs on whether the Internet can see your BGP prefixes, warns you when someone might be hijacking them, flags RPKI misconfigurations, and can alert you via several mechanisms.
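As a rough illustration of the kind of checks such a tool automates, here is a simplified, stdlib-only sketch (the prefixes, ASNs, and alert wording are invented for the example; BGPalerter's real configuration and logic differ):

```python
# Minimal sketch of prefix-monitoring checks: visibility, origin mismatch,
# and unexpected more-specifics. All values below are illustrative.
import ipaddress

EXPECTED = {
    "192.0.2.0/24": 64500,   # prefix -> expected origin AS (example values)
}

def check_announcement(prefix: str, origin_as: int, visible: bool) -> str:
    """Classify an observed BGP announcement against our expectations."""
    if prefix not in EXPECTED:
        net = ipaddress.ip_network(prefix)
        for known in EXPECTED:
            if net.subnet_of(ipaddress.ip_network(known)):
                return "alert: unexpected more-specific (possible hijack)"
        return "ignored: not ours"
    if not visible:
        return "alert: prefix no longer visible"
    if origin_as != EXPECTED[prefix]:
        return "alert: wrong origin AS (possible hijack)"
    return "ok"

print(check_announcement("192.0.2.0/24", 64500, True))  # ok
print(check_announcement("192.0.2.0/25", 64511, True))  # more-specific alert
print(check_announcement("192.0.2.0/24", 64511, True))  # wrong-origin alert
```

A production monitor does this continuously against live BGP feeds and fans alerts out to email, Slack, and other channels.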
Network monitoring has, since the beginning, been an important part of provisioning state-of-the-art services: if you cannot monitor, you cannot run a service! Bandwidth and error monitoring are old news, then... but what about how traffic flows? Services are increasingly real-time and less asynchronous: today we video call far more than we send email, to mention just one network-enabled human interaction.
Monitoring latency and jitter, including collecting historical data in a consistent way, is still new (or at least unusual), even though many applications depend on keeping these parameters low and stable. With the TimeMap service now deployed along the whole GÉANT backbone, we can obtain a detailed analysis of what is really happening on the network, check and understand what causes certain issues (especially in real-time communication), and then optimize or fix each network accordingly.
We are also starting to use machine learning to automatically spot when something looks wrong and call for the operator’s attention. All this new information about network behavior gives a new perspective on network monitoring and also shows, for the first time, what happens when “something on the network changes”, for example when routing tables are updated. TimeMap is not just for backbones: tailored installations can also cover more local situations. Come, learn, and see it in action.
HPN-SSH debuted more than 15 years ago with the goal of providing a high-performance, easy-to-use and maintain data transfer solution. Since that time HPN-SSH has become widely deployed and integrated with other solutions such as gsissh. This talk will focus on advances and features developed since receiving an NSF Elements grant in 2020. We will also be discussing future work and maintaining sustained development.
The Internet runs on the Border Gateway Protocol (BGP). When BGP was released, it was designed to be easy to use, but it was not very secure. The Internet Routing Registry (IRR) ecosystem allows resource holders to tell other operators where to find the correct destination for their resources. Over time, BGP and the IRR have been updated with features that enhance routing security, but human error and malicious activity can still impact your resources and your business.
The Resource Public Key Infrastructure (RPKI) offers another level of protection by letting you make a cryptographically verifiable statement that you are the rightful source for the resources allocated to you. Come find out how RPKI is the best way to keep your data flowing and your business safe from the accidental or intentional actions of others. Find out how to get started with ARIN’s RPKI services and strengthen your routing security. Also hear about the recent changes to the ARIN LRSA/RSA process that streamline covering your legacy resources under an LRSA/RSA.
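The origin-validation decision that ROAs enable can be sketched as follows (RFC 6811 semantics; the ROAs and routes here are illustrative examples, not real allocations):

```python
# Sketch of RPKI route origin validation (RFC 6811), stdlib only.
# A route is "valid" if some covering ROA authorizes its origin AS and its
# prefix length is within the ROA's max length; "invalid" if covered but
# unauthorized; "not-found" if no ROA covers it at all.
import ipaddress

# Each ROA: (prefix, max_length, authorized origin AS) -- example values.
ROAS = [
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
    (ipaddress.ip_network("198.51.100.0/22"), 24, 64501),
]

def validate(route: str, origin_as: int) -> str:
    net = ipaddress.ip_network(route)
    covered = False
    for roa_net, max_len, asn in ROAS:
        if net.subnet_of(roa_net):
            covered = True
            if asn == origin_as and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"

print(validate("192.0.2.0/24", 64500))     # valid
print(validate("192.0.2.0/24", 64999))     # invalid: wrong origin
print(validate("198.51.100.0/23", 64501))  # valid: within max length
print(validate("203.0.113.0/24", 64500))   # not-found: no covering ROA
```

The "invalid" outcome is what stops a hijack: once your ROA exists, an announcement of your prefix from the wrong AS can be rejected by validating networks.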
If you could build the next Internet what would it look like? FABRIC is developing an advanced national network infrastructure that will help network, security and systems researchers do just that, and along the way helps make scientific discoveries faster and easier by improving the underlying complex cyberinfrastructure and algorithms. The FABRIC team will update the Internet2 community on the status of FABRIC testbed deployments and connectivity across campuses, labs, I2 and ESnet. We’ll demonstrate our user-facing capabilities, tell you about dangerous experiments you should know about, and talk about the science domains already doing research on FABRIC.
Insight Console is a new web-based tool for troubleshooting and managing your Internet2 network services. In this session, you will learn:
- Where to get an early look
- How to run a quick test drive
- How to provide feedback
A lightning talk on one engineer’s perspective on how to pick network automation and Source of Truth (SoT) platforms. Topics include “NautoBox,” Nornir, Ansible, Netmiko, and unnamed vendors’ “One True Answer” for your automation needs. Disclaimer: There isn’t only one answer.
This is the talk that one engineer wished she heard three years ago: a “What-Have-I-Learned” about starting from ground zero in network automation. From testing to production in a brownfield campus environment with over 80 model types, topics include network automation possibilities at different maturities, the need for spinning up a network “Source of Truth,” the minimum data and effort required to get started, losing sleep trying to figure out the best approach to start, and popular tooling. This is a discussion of lessons learned and how those lessons carried over to Internet2’s latest efforts.
-Which solutions are implemented in European NRENs (e.g. in PSNC-Poland, RENATER-France, CESNET-Czech Republic, SWITCH-Switzerland, GÉANT-PanEU)
-Advantages and disadvantages of existing solutions
-Monitoring and management aspects for T&F services
In addition, the work conducted by the GÉANT project’s Optical Time and Frequency Network (OTFN) team on bridging these gaps toward a pan-European T&F solution, despite the differences in national implementations, will also be presented. The ongoing work correlates with and complements other pan-European initiatives (such as the CLONETS project), and the technical solutions developed in the OTFN project will enable widespread implementation and scaling of fibre-optic T&F distribution across different NREN networks.
More information is available here:
This presentation will provide an update and a high-level summary of major changes and progress on the evolution path of the GÉANT network. This presentation will be of interest to network engineers and managers of R&E networks, and the R&E community in general.
The GÉANT network in 2018 had 23 European dark fibre routes, with an average annual lease cost per route of €110k at €0.234 per metre per annum; most of the contracts for these fibres were expiring in 2020/21. The GN4-3N project, which started in January 2019, provided an opportunity to reconsider GÉANT’s fibre topology and at the same time plan the renewal of its optical platform. This project has enabled GÉANT to grow a much richer fibre footprint across Europe and increase meshing in the network. The IRUs acquired during this project are for a 15-year term, providing a long-term infrastructure for GÉANT and a much higher level of sustainability. This presentation will provide an overview of GÉANT’s new fibre footprint across Europe.
GÉANT continues to follow a path towards optical layer disaggregation in transponders and line system. In the period 2018-2019, GÉANT optimised its network design by disaggregating the transponders from the optical layer using Data Centre Interconnect (DCI) technology. In 2020, GÉANT announced Infinera as the winning bidder for the new Open Line System optical platform in the GÉANT network. The new GÉANT dark fibre network is being lit using Infinera Groove G30s and the wavelengths will be switched and amplified using FlexILS ROADMs and amplifiers.
New solutions for further evolution of the network infrastructure continue to be evaluated. To qualify, a new solution must either be capable of reducing costs or provide improved technical capabilities. Through testing these different solutions in the lab and in the field, it is possible to assess whether they can be deployed and used in the network without compromising service levels.
In 2021, GÉANT started to approach the industry regarding IP/MPLS platform availability in 2023. The existing first-generation Juniper MX960/480 estate has reached the maximum capacity of the platform at 1Tbps per slot and will no longer be developed by the vendor. GÉANT has started conversations with all the major IP/MPLS router vendors to determine what their respective catalogues will contain in 2023, the expected date of contract award. Traffic demands continue to be forecast to shape the requirements of any future GÉANT routing platform.
The substantial changes planned in the GÉANT network over the next 3-5 years, including the adoption of an Open Line System and the integration of hardware from new vendors, require a more flexible inventory system. Therefore, a commercial open procurement for a new inventory system was carried out, culminating in a contract award in December 2019, and was successfully deployed and operationalised in Q3 2021, allowing the old inventory system to be retired. A high-level overview of the modernisation of GÉANT network operations tools will also be presented in this session.
This presentation will aim to provide an up-to-date and high-level summary of major changes in the GÉANT network. As the GÉANT network is going through a major transition and significant improvements are being made, this topic will be of interest to the participants from R&E networks and to the R&E users involved in data-intensive science.
In the last decade, research optical networks have faced multiple challenges in provisioning non-data, ultra-precise, and stable services.
In this contribution, we present the creation of a sublayer in the optical network, based on reserved dark spectrum, capable of interconnecting quantum sources of coherent optical frequency, transmitting very precise timing, and carrying Quantum Key Distribution (QKD) signals.
We also introduce the Czech Infrastructure for Time and Frequency (CITAF) activity, a non-commercial, open activity focused on the transfer of accurate time and very stable quantum sources of optical frequency using optical networks. We briefly summarize the prerequisites for long-haul transmission of highly precise time, with accuracy down to the picosecond order in terms of TDEV (500 ps in terms of MTIE), and of ultra-stable coherent optical frequency, with stability on the order of 10^-18 in terms of ADEV at a 1000 s averaging interval, over hundreds of kilometers of optical fiber shared with telecommunication data transmission. Another interesting application is vibration sensing in the proximity of the fiber line. It can easily be realized on lines with coherent optical frequency transmission; however, an implementation of vibration sensing/detection on telecom data links will be shown as well.
We will also discuss the creation of bidirectional dark channels within shared fibers, including the issues of bidirectional amplification and of avoiding unwanted oscillations and interference with data transmissions. Finally, we address short-term upgrades and future development plans regarding wavelength bands, and consider geographic extensions including cross-border applications.
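For readers less familiar with the stability metrics above, ADEV quantifies frequency stability from fractional frequency samples; a minimal sketch at a single averaging interval (real characterization sweeps the averaging time τ and typically uses overlapping estimators):

```python
# Minimal, non-overlapping Allan deviation (ADEV) sketch for fractional
# frequency samples y_i, each averaged over the same interval tau:
#     ADEV = sqrt(0.5 * <(y[i+1] - y[i])^2>)
import math

def adev(y: list) -> float:
    """Allan deviation of fractional frequency data at the sample
    averaging time."""
    diffs = [(b - a) ** 2 for a, b in zip(y, y[1:])]
    return math.sqrt(0.5 * sum(diffs) / len(diffs))

# A perfectly constant frequency offset is perfectly stable: ADEV is 0.
print(adev([3e-18] * 8))          # 0.0
# Samples alternating between +1e-18 and -1e-18 give ADEV = sqrt(2)*1e-18.
print(adev([1e-18, -1e-18] * 4))
```

Note that ADEV measures stability, not accuracy: the constant 3e-18 offset above yields zero ADEV even though the frequency is biased.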
Do you read articles about Internet censorship, corporate data collection, or new FCC rules that make you wonder how these decisions are made, and by whom? Technology policy and regulatory decisions at the national level affect our campuses, our profession, and our personal lives. But too often it’s seen as an area in which we as individuals can have little impact. We’ll have a panel discussion on how, why, and where you can get involved, how you can work with others in the R&E community, and how you can make a change for good. You don’t have to have a background in public policy to make a difference!
Members of the tech community (your peers) who have direct experience with, or knowledge of, various policy-impacting organizations will provide an overview of different regulatory agencies and other venues where tech policy is discussed. Our goal is to educate and energize subject matter experts to get involved with tech policy.
GlobalNOC and many of the networks we support have recently been focused on deploying systems for automated management of network device configurations — allowing many individual devices to be coordinated and orchestrated as a single system. While this automation provides immediate benefit in terms of service delivery speed, consistency, and efficiency, it also enables automation in other areas of the network operations lifecycle.
In this talk, we will show how we’re building automated systems on top of these automated-configuration networks to improve the way networks are operated. We will focus on the automated tools we’ve been developing to assist network engineers in dealing with outages/incidents on BGP-signaled IP services — including troubleshooting dashboards, guided workflows, and automated remediation tools.
We will also discuss our vision for automated network operations more broadly and how we see the pervasive deployment of automation-first networks enabling new ways to operate our networks more effectively and efficiently.
In 2022, following on the successful use of foundational automation and orchestration tools during the rollout of Internet2’s NGI network, Internet2 Network Services embarked on an ambitious program to produce a unified, integrated, extensible, and ubiquitous set of software services and supporting architecture to enhance members’ ability to explore, monitor, and manage their Internet2-provisioned network services. This presentation will provide an overview of the design and current state of those efforts, and serve as an introduction to a number of other presentations taking place across the remainder of the conference.
Speaker: Mike Simpson, Internet2
Network automation and orchestration have been a major focus in the networking world for several years. Over time, along with terms such as DevOps (with the general image of melding traditional network engineer and programmer), containers, and Kubernetes, these terms replaced an earlier emphasis on things such as OpenFlow and Software Defined Networking (SDN). In 2017, Internet2 held its first Network Automation Tools and Practices Workshop at the Technology Exchange in San Francisco. In the years that followed, Internet2 offered more workshops and sessions around the topic.
Much as SDN before it, however, ask five people what network automation is and you get six different answers. And while Internet2 offers various collaborative means for the community to communicate, from mailing lists to Slack channels, based on how quiet those are, one would think the area of network automation/orchestration has largely been solved. Done and done. Time to move on.
However, there appears to be a disconnect between that silence and the state of network automation across the community. This session hopes to address it. It is intended for anyone interested in network automation, at any level, from entry-level network engineer to executive, from someone who has never done any kind of automation to someone who has fully automated entire networks. This combined lightning-talk and discussion panel offers insights into five distinct journeys into network automation. Each panel member will tell their own story in their own words, followed by an opportunity for attendees to ask questions and engage the panel and community on the subject.
In an environment of constantly accelerating technology advancement, research and education network providers must also adapt to remain relevant. This session will delve into the key technologies implemented in the Internet2 NGI network, and how they enable agility and performance. The focus of the session will be on technologies as opposed to products. Areas of technology including optical fiber, flexible grid ROADMs, advanced long-haul modems, pluggable optics, OTN switching, EVPN, Segment Routing, and NetConf-based automation will be discussed. The session will break down each enabling technology and highlight the advantage that it brings to the network.
Features in perfSONAR 5.0 include monitoring tools for a wider array of protocols and capabilities that enhance integration with external orchestration. pSSID is open-source software that uses perfSONAR’s new capabilities to provide distributed active WiFi monitoring with modern low-cost single-board computers such as the Raspberry Pi.
This talk will detail the capabilities and provisioning process of the pSSID WiFi monitoring system, and present a case study of its initial deployment at the University of Michigan.
At TechEx19, the University of Minnesota talked about its then-recent campus EVPN implementation. This talk builds on that, covering our recent implementation of routed multicast over EVPN and other new EVPN-related topics.
The University of Minnesota has implemented an automated border blocking facility utilizing BGP blackhole or BGP FlowSpec. BGP blackhole is used to block all traffic, typically from an individual IP address on the Internet. BGP FlowSpec is typically used to block traffic more surgically, based on IP protocol or source or destination ports, over a broader attack surface.
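The difference between the two blocking styles can be sketched as follows (the function names, rule fields, and addresses are hypothetical, not a real router API; 65535:666 is the well-known BLACKHOLE community from RFC 7999):

```python
# Hypothetical sketch of the two blocking styles: RTBH (blackhole) discards
# everything for one host, while a FlowSpec rule matches narrower fields
# across a wider attack surface. Fields and addresses are illustrative.

def blackhole_rule(ip: str) -> dict:
    """Announce a /32 with the blackhole community: all traffic dropped."""
    return {"type": "blackhole", "prefix": f"{ip}/32", "community": "65535:666"}

def flowspec_rule(dst_prefix: str, protocol: str, dst_port: int) -> dict:
    """Match on protocol and port so only the attack traffic is dropped."""
    return {
        "type": "flowspec",
        "match": {"dst": dst_prefix, "protocol": protocol, "dst-port": dst_port},
        "action": "discard",
    }

# One abusive host: drop everything to/from it.
print(blackhole_rule("203.0.113.7"))
# A UDP reflection attack on a whole subnet: drop only UDP/389 (LDAP).
print(flowspec_rule("198.51.100.0/24", "udp", 389))
```

The trade-off: blackholing completes the denial of service for the targeted host, while FlowSpec keeps legitimate traffic flowing at the cost of more complex rules.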
This workshop walks through the process of provisioning and configuring a full perfSONAR deployment with Ansible automation. This will cover measurement test points, data archives, dashboards, and schedule publishers. We will discuss component infrastructure dependencies and overall system architecture.
So you have a fancy new orchestration system; now it’s time to migrate to it. In this talk, we’ll cover how we migrated from a manually configured network to one that’s managed by automation and orchestration: everything from exploring existing configuration, extracting information, and generating new configuration, to how we organized the people running the migrations to keep some semblance of sanity through it all.
Internet2 intends to protect members’ routing integrity by implementing RPKI route origin validation. This session reviews Internet2’s plans for RPKI ROV and seeks input from members and connectors.
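Internet2's validator deployment details are for the session itself, but the origin-validation outcome ROV computes is standardized in RFC 6811. This sketch classifies a route against a set of ROAs; the data values are illustrative:

```python
import ipaddress

def rov_state(prefix, origin_as, roas):
    """Classify a route per RFC 6811: 'valid', 'invalid', or 'notfound'.

    roas: iterable of (roa_prefix, max_length, asn) tuples.
    A route is valid if some covering ROA matches the origin AS and the
    announced prefix is no longer than the ROA's maxLength; invalid if
    covered but never matched; notfound if no ROA covers it.
    """
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if net.version == roa_net.version and net.subnet_of(roa_net):
            covered = True
            if asn == origin_as and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "notfound"

roas = [("198.51.100.0/24", 24, 64500)]          # example ROA
print(rov_state("198.51.100.0/24", 64500, roas))  # valid
print(rov_state("198.51.100.0/25", 64500, roas))  # invalid (exceeds maxLength)
print(rov_state("203.0.113.0/24", 64500, roas))   # notfound
```

Whether "invalid" routes are dropped or merely depreferenced is the kind of policy choice the session solicits input on.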
The meeting has been merged with the NTAC Peering & Routing WG. This long-standing open working group discusses issues related to BGP routing and peering on both the R&E and I2PX routing instances on the Internet2 backbone.
Dan Schmiedt <firstname.lastname@example.org> and
Tony Brock <Anthony.Brock@oregonstate.edu> are the co-chairs for this working group.
In this talk, we’ll tell the story of building the NGI orchestrator all the way from selecting the platform, and discovering and developing service models, to testing and validation.
NGI brought a new route policy language and an orchestration platform to Internet2, and with them a chance to polish up our eBGP route policy. In this talk we’ll cover best practices we worked into our new route policy, how we structured such a policy to play nicely with orchestration, and finally how we try to make common things simple and odd things possible.
The ESnet team has been focused on the ESnet6 project upgrading the ESnet network for several years. This panel will provide a high-level overview of the project, focusing on highlights and lessons learned. The panel will include multiple ESnet engineers who lead or participated in different parts of the implementation. They will review different aspects of the ESnet6 implementation and discuss what went well, and what could have gone better.
ESnet interconnects the DOE’s national laboratory system, dozens of other DOE sites, and ~200 research and commercial networks around the world—enabling tens of thousands of scientists at DOE laboratories and academic institutions across the country to transfer vast data streams and access distributed research resources in real-time. ESnet achieves this by providing high-bandwidth, reliable connections that enable many thousands of the nation’s scientists to collaborate on some of the world’s most important scientific challenges including energy, biosciences, materials, and the origins of the universe. ESnet operates what is essentially the Department’s circulatory system for the movement of large-scale scientific data, providing real-time networking to many thousands of users across the entire DOE complex.
The ESnet6 project upgraded the ESnet infrastructure, providing new networking resources on a dedicated optical fiber infrastructure to deliver a significant increase in networking capability and resiliency in support of the SC research community. ESnet6 is providing targeted security service automation, operational support automation, and a platform for the development and demonstration of innovative new network services.
Together, the Internet2 Network Services API and Internet2 Insight Console provide a unified, integrated, and extensible platform for exploring, monitoring, and managing all of the services delivered by Internet2’s Network Services. This presentation will provide a quick introduction to the API and Insight Console, how they interoperate, and how they integrate with external backend services; it will be followed by a more detailed walk-through of the I2 Insight Console environment.
The Case for “Cloud Native” Network & Security Architecture: Adopting cloud PaaS and SaaS services in higher ed has traditionally caused campuses to extend their legacy network, security, and other infrastructure from campus to the cloud. This approach slows adoption and weighs cloud workloads down with legacy systems and procedures. Institutions should instead aim to adopt cloud IaaS and PaaS technologies natively and avoid making their cloud environments dependent on campus network and security tools.
I propose starting with an Internet-centric, zero-trust approach with identity as the perimeter when designing new cloud workloads to meet specific business needs. By building cloud “islands” (my term for secure cloud IaaS and PaaS deployments that do NOT require legacy campus networks and security tools) we can simplify the architecture of many workloads while at the same time making them more robust and more secure. In this workshop, I will work together with engineers from AWS, Google, and Microsoft, to guide attendees through a discussion around the wisdom of this approach, where to apply it, and provide examples of how to implement it on the three platforms.
Implementing cloud foundational standards can be a challenge, but what do you do if you have workloads already running in the cloud without any standards at all? How do you manage continually adding new cloud standards as institutional policies and needs evolve? How do you create an infrastructure that is scalable and supportable and that meets security requirements? The University of California Office of the President was presented with the challenge of moving from cloud chaos to cloud standards, and will share the automation and tools put in place to align standards for new and existing workloads.
With increasing demand for utilization of public clouds in conjunction with the need to work with secure workloads like CUI, how do you go about creating a platform that enables researchers while establishing security controls to meet compliance requirements?
This presentation will go into our experience building a CMMC compliant cloud in Azure Gov Cloud utilized by researchers at CU Boulder. We will cover the design choices we made, our experiences with implementations, and potential items on our roadmap and how we plan to achieve them.
For years institutions have leveraged free or low-cost cloud storage on platforms such as Google, Box, Microsoft, and others to provide their communities with a means to collaborate anytime from anywhere on any device. It transformed the way we all worked.
One by one, the vendors upped the ante by raising and finally eliminating quotas. This not only raised users’ confidence in their ability to rely on the platforms for all their work, but it opened the door to unintended uses. Students backed up their music and video libraries. Sysadmins backed up their servers. Researchers hosted massive data sets.
Storage grew at an alarming rate and the vendors began to push back, raising rates and rolling back unlimited storage, if not in name, then in practice. Schools were forced to take a critical look at their storage profile, and begin to consider data lifecycle and storage policies. Some decided to consolidate on fewer platforms to save cost and exposure. In the process, they discovered the challenges and the cost of moving large amounts of data while still preserving collaborations and the associated comments and metadata. Naturally many vendors offered solutions, but few, if any, handled higher education’s complexity gracefully.
The panelists will detail a variety of typical workflows built up around these platforms. They will share their challenges, lessons learned, and recommendations to both their colleagues and the vendors in hopes of finding real solutions.
Two one-hour blocks are devoted to a selection of the best sessions from the Cloud Forum:
- AWS Account Migration at the University of Chicago – Shelley Rossell, University of Chicago
In the ongoing quest to be able to view and manage AWS accounts consistently and more easily, the University of Chicago needed to migrate accounts from a legacy Organization into their own, then ultimately from an unmanaged OU to a control-tower managed OU. This presentation describes the purpose, the process, and the results of each migration.
- So, You Want to Move to the Cloud. What Could Go Wrong? – Bob Flynn, Internet2
The cloud service providers are eager to help your institution, your researchers, and your students make the most of the cloud. Your infrastructure will be agile and efficient. Your researchers will reach higher heights. Your students will all graduate with cloud jobs. The vision is so clear. It all seems so easy. What could go wrong?
- Let a Thousand PaaSes Bloom – Matthew Rich, Northwestern University
The hyperscalers take up much of our thoughts, but a new generation of PaaS vendors is capturing the hearts of your developers. This short talk will describe a handful of modern PaaSes you should be aware of.
- Provisioning GovCloud Accounts – Shelley Rossell, University of Chicago
GovCloud account setup differs from that of commercial AWS accounts. This is a summary of key lessons learned and our current GovCloud account provisioning process.
A Cloud Center of Excellence (CCoE) enables your organization to put the right people in the right roles to move your culture forward. Hear how to build a successful CCoE from someone who learned the hard way.
Speaker: Chris Kuehn (AWS)
WashU leverages Veeam and AWS for cloud-hosted, extra-regional backups. This talk will provide an overview of how (and why) we are modifying our hybrid-cloud architecture to move away from AWS S3 and adopt a competing S3-compatible cloud storage provider for our cloud-hosted backup storage.
Speaker: John Bailey, Washington University in St. Louis
A primary pain point for any organization using the public cloud is cost, and the management and understanding of that cost. Minimizing cost is often essential to success, but can impact ease of management and security if not done thoughtfully. Depending on contract limitations, account settings, or other variables, cloud architects and admins have to determine if the benefits of certain services outweigh the costs. Balancing the need for cost containment and security through tooling and intra-organizational communication is critical, and involves close collaboration with a number of stakeholders, including networking and security groups, in addition to the cloud team.
In this talk, we plan to walk through some of the decision points and tooling for reducing cost, and when not to do so. Reducing networking complexity, even through means that add cost, can be organizationally positive in spite of the added charges. Architecting from the beginning around multiple accounts, security-segmented virtual networking zones, and a reduced risk of sudden costs is optimal, but much can be done even for environments already in use.
This session will provide an overview of the current cloud providers’ capabilities for bringing your own IP addresses to the cloud. We’ll cover how the providers announce your IP network, and the role of routing security.
Northwestern University Library’s Repository & Digital Curation workgroup has been developing and running cloud-hosted applications since 2016. Our initial approach was a simple “lift-and-shift” operation, moving our premises-hosted services to virtual machines hosted in Amazon EC2. We quickly migrated to some cloud-native utility services like our relational database and search index. More recently, we have started to take advantage of the scalability offered by cloud-native services such as serverless functions (AWS Lambda), serverless container infrastructure (AWS Fargate), and short-lived, large-scale batch operations.
Now we are taking the next step, translating our bespoke data processing pipeline into a series of state machines using AWS Step Functions and other cloud technologies. At each step, we have had to reevaluate our local development environment, either emulating or mocking the AWS services our applications depend on. To overcome the limitations of this approach, we have created a cloud-based development environment based on a heavily customized AWS Cloud9 environment. This has allowed each developer on our team to access the full range of AWS offerings, generating quick prototypes and iterating quickly on solutions, without the overhead or uncertainty of trying to emulate the entire stack on a laptop or local workstation, but without sacrificing the development tools we’re accustomed to. It has also led to a development platform that can be stood up and torn down quickly for easy onboarding as well as a quick “reset” of one or many existing setups. This presentation will explain our approach, how we got here, and how it’s going, as well as the surprises and challenges we have encountered along the way. Though we are just getting started, by December, we should have some good analysis of the costs of this approach to share as well.
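The library's actual pipeline definition isn't shown here, but the translation described above targets AWS Step Functions' Amazon States Language (ASL). A minimal sketch of chaining Lambda task states into an ASL definition follows; the state names and function ARNs are placeholders, not Northwestern's real pipeline:

```python
import json

def make_pipeline(steps):
    """Build a minimal Amazon States Language definition chaining Task
    states in order. steps: list of (state_name, resource_arn) pairs."""
    states = {}
    for i, (name, arn) in enumerate(steps):
        state = {"Type": "Task", "Resource": arn}
        if i + 1 < len(steps):
            state["Next"] = steps[i + 1][0]   # chain to the following state
        else:
            state["End"] = True               # terminal state
        states[name] = state
    return {"StartAt": steps[0][0], "States": states}

# Placeholder step names and ARNs, for illustration only.
definition = make_pipeline([
    ("ExtractMetadata", "arn:aws:lambda:us-east-1:000000000000:function:extract"),
    ("TranscodeMedia", "arn:aws:lambda:us-east-1:000000000000:function:transcode"),
    ("IndexRecord", "arn:aws:lambda:us-east-1:000000000000:function:index"),
])
print(json.dumps(definition, indent=2))
```

The appeal of the state-machine form is that retries, error catching, and parallelism become declarative fields on each state rather than bespoke pipeline code.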
As public cloud usage grows on campus it creates pressure to build and staff cloud enablement services. Building a justification for growing (or starting) a cloud enablement team means measuring impact. Easier metrics like total $ spent on public cloud or the total number of users in the cloud platform quickly come up, but rarely justify much additional spending.
This panel will discuss different schools’ approaches to measuring the impact of public cloud on campus with an eye toward the return on investment presented by cloud enablement services.
In November 2021, Internet2 migrated the national eduroam proxies to run in the cloud. Over the next months, this system grew in resilience and features. How were the two systems run concurrently during the migration? What problems were encountered? How do logs get to the log viewer, and how does it guarantee that nobody outside of your organization can see your logs? Learn how the system was deployed and expanded over the last year.
Indiana University, like many institutions, runs a variety of tools for managing the applications we develop and implement. This session is about the effort to align that tooling into a consistent platform that provides a robust developer experience for maintaining our applications. This platform serves as the underpinning of IU’s opportunistic cloud approach, where we seek to make the best use of our on-premises and cloud infrastructure to deliver the best solutions for our development teams and constituents. During this session, you will hear about the platform we are building at Indiana University and how we are leveraging it to manage our applications, cloud accounts, and more to make our hybrid cloud strategy a reality.
The Globus service is moving beyond research data management and applying the Software-as-a-Service model to research computing. Over the past two years, the Globus Labs team has developed funcX, a service that facilitates function execution across a federated ecosystem of compute endpoints spanning personal laptops through supercomputers. Having proven the model in select research projects, we would like to engage with the community as we work towards a generally available product.
We will provide an overview of the funcX service, describe early implementations, and review planned features to ensure the product effectively addresses critical use cases for the research community. We would also like to solicit input from attendees to shape future releases of the service.
This session will report on the development of Zero Trust Federations as part of the IoT Security at the Edge project, funded by DHS and the State of Virginia.
The Zero Trust Architecture (ZTA) concept has been developed to enhance overall system security by implementing security boundaries around all important enterprise resources, large or small, thereby creating zones of implicit trust. The ZTA approach is a direct parallel with the approach taken in the Cloud Federation Reference Architecture (CFRA) in NIST SP 500-332 and the IEEE 2302-2021 Standard on Intercloud Interoperability and Federation. The NIST CFRA takes a distributed API Gateway approach whereby per-service security boundaries are defined.
These APIs are codified in IEEE 2302-2021. In NIST SP 800-207, Zero Trust Architecture, the ZTA approach is defined by seven design tenets, including how implicit trust zones are identified and how communication among them must be managed. The NIST CFRA and IEEE 2302-2021 satisfy all of these design tenets. Hence, the notion of Zero Trust Federations is straightforward.
Join a team of four attendees, one of seven such teams, for a fast-paced role-playing security tabletop exercise. No prerequisite experience is needed. The exercise is led by Romain Wartel, a leading security practitioner from CERN.
Please register before 16:00 Wednesday, Dec. 7, by contacting Romain.Wartel@cern.ch. Please include a brief description of your areas of interest and skill set (e.g., “team manager,” “Linux programmer”). Slots will be allocated on a first-come, first-served basis.
Identity and Access Management
InCommon’s new Support Org model has allowed state-wide educational networks to deploy the global eduroam wireless service to K-12 schools, libraries, and museums. This offers significant opportunities to increase access, but can come with challenges as K-12 schools operate at different scales and use different toolsets.
Come see how Nebraska has engaged with K-12 schools across their state to deploy eduroam, specific usage examples of connecting Google identities to the eduroam federation, and how you can get involved within your state/regional network.
Speakers: Brett Bieber (University of Nebraska); Mark Donnelly (Painless Security)
Such a time we’ve all had, these last two plus years. Lots of change, some good and promising, some rough and vexing. What’s been going on in our IAM world? Nicole and Ann will take an entertaining spin through the big topics of the day and set the stage for the rest of the week.
BOF with Nick France from Sectigo
Certificates have been around for quite a long while and we all have to deal with them in one way or another. In recent years, the CA/Browser Forum has tightened rules of issuance, like maximum certificate validity, which only increases the workload of already-overburdened IT staff.
The InCommon Certificate Service, powered by Sectigo, helps our community to manage the various challenges that come with managing large numbers of certificates. Join us for this BOF to hear Nick France, CTO of SSL at Sectigo, discuss various strategies for managing SSL certificates at scale using the features of the InCommon Certificate Service and get your questions answered about how you can implement this at your organization.
The development team lead, Phil Smart, will provide an update on the state of the Consortium and the project, with a summary of 2022 activity and forthcoming plans. There should be time for Q/A from the community.
Assurance requirements have continued to evolve during the pandemic, both in international standards and in NIH service providers’ requirements for increased assurance.
This session will cover two aspects: NIH Assurance – It’s possible!: In this part of the session, the NIAID international team will present on how it met NIH’s increasing assurance expectations. The team established assurance from the initial account issuance (implementing the REFEDS Assurance Framework (RAF)) and preserved the assurance during the account’s life (implementing the REFEDS Multi-Factor Authentication Profile), even when that account reaches out to Service Providers across the federation (implementing federation assurance).
As NIH’s assurance requirements are evolving, so too are international federation standards, leading to the second part of this presentation: Evolution of the REFEDS Assurance Framework. In this part of the session we will inform attendees of the latest work on the international standard RAF. Join us to share your ideas and experiences in the field of identity assurance and get prepared for the upcoming community consultation.
Speakers: Matthew Economou, Kyle Lewis, Jule Ziegler
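Concretely, a service provider consuming the two REFEDS profiles named above checks attribute and authentication-context values on each assertion. The sketch below is illustrative only: the REFEDS URIs are real, but the specific IAP levels required and the function name are assumptions, not NIH's published policy:

```python
REFEDS_MFA = "https://refeds.org/profile/mfa"
IAP_PREFIX = "https://refeds.org/assurance/IAP/"

def meets_assurance_expectations(assurance_values, authn_context):
    """SP-side sketch: require a sufficiently strong REFEDS Assurance
    Framework identity-proofing (IAP) value in eduPersonAssurance, plus
    the REFEDS MFA profile as the SAML AuthnContextClassRef.

    Treating 'medium' or 'high' IAP as sufficient is an illustrative
    assumption, not a statement of NIH policy.
    """
    has_iap = any(
        v.startswith(IAP_PREFIX) and v[len(IAP_PREFIX):] in ("medium", "high")
        for v in assurance_values
    )
    return has_iap and authn_context == REFEDS_MFA

print(meets_assurance_expectations(
    ["https://refeds.org/assurance/IAP/medium"], REFEDS_MFA))  # True
```

The session's point is that these values must be established at account issuance and preserved across the account's life, not merely asserted at login time.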
This joint session will have each of the Catalysts share a bit about who they are and the value they provide to higher education and the open source community. There will be presentations covering a variety of topics, from consulting services to building connectors, managed solutions, and new UI applications. You’ll hear about ITAP services and support, including Federation, hosting, and more.
Come hear what we have to offer. We’ll collaborate and work together to help meet your needs: The InCommon Catalyst and the Community come together!
The higher education information security community, InCommon TAC, and the HECVAT working group collaborated to update the HECVAT for a 3.0 release! Our session this year will go over the major HECVAT 3.0 update including the IAM updates, future plans, and ask for your input on where to develop HECVAT in the future. We want to hear your feedback on the IAM updates! We’ll also go over where we need you to get involved to build more resources for the community and the service providers that support us.
The InCommon Sirtfi Exercise Planning Working Group (SEPWG) will present on InCommon’s first distributed cybersecurity tabletop exercise, in which participating organizations practiced their Sirtfi procedures and strengthened federation- and community-mindedness when handling cybersecurity incidents involving identity and access management. The presentation will cover the exercise concept, the exercise scenario, and lessons learned; it will also propose potential goals for next year and seek volunteers for a 2023 SEPWG.
After years of sitting on the federated identity sidelines, the library community has now stirred and is at the door, demanding to be part of the action. Massive thefts of online content have led publishers to abandon traditional IP address protections for more flexible and powerful federated alternatives. The Seamless Access efforts have addressed barriers to users discovering their IdP. New end-entity tags for SAML metadata define anonymous and pseudonymous authentication standards. The FIM4L library efforts have promoted a number of critical activities, including engagement with libraries and publishers and a recent consultation on implementation guidelines for privacy and the new end-entity tags. At NSF, new efforts around the FAIR data management principles and open science are turning up compelling use cases for sophisticated access controls. This session will discuss what stirs the librarian and how we can serve them.
Academia has long been concerned with the privacy and security of our community. Big tech is under pressure to do the same, but their focus is on the consumer web. What happens when the web is redesigned to optimize for consumer-type transactions that do not align with online academic services? In this session, we’ll discuss the changes that the major browser vendors are making to their platforms to prevent unsanctioned tracking of people, and how those changes are impacting authentication and authorization on the web.
With more and more countries and regions developing privacy regulations to protect their citizens online, big tech, in particular the browser vendors, are finding their decades-long ‘hands-off’ approach to the material that goes through their platforms no longer acceptable to people and governments. As a result, they are becoming an active party in the web experience, particularly where a tool or features of the web might be used to track individuals. Unfortunately, the changes being discussed and implemented will not affect only trackers; they also have implications for SSO and federated authentication.
Cookies are bits of information stored in a browser that contain information about the individual or their session. Link decoration is that extra something on a URL that includes information about the session or about where the individual found the link. Redirects, which an individual might not even see, take them so quickly to another site that it appears they have visited that site and it is “safe.” All of these are used by trackers to follow and target individuals across the web. They are also used by the protocols underlying federated authentication, including SAML and OpenID Connect. As the browser vendors seek to control or even remove the functionality of cookies, link decoration, redirects, and other technologies that may be used to track an individual, they are introducing complexity into the authentication and authorization process for online content.
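The overlap is easy to see in a federated sign-in redirect: the URL carries session state in its query string, exactly the "link decoration" pattern browser heuristics target. The sketch below uses the parameter names of the SAML HTTP-Redirect binding; the hostnames and the truncated request value are illustrative placeholders:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Parameters a SAML SP appends when redirecting the browser to an IdP.
# The SAMLRequest value is a truncated placeholder, not a real request.
params = {
    "SAMLRequest": "fVLBbuIwEP0V",  # deflated+base64 AuthnRequest (placeholder)
    "RelayState": "https://sp.example.edu/article/42",
}
redirect = "https://idp.example.edu/profile/SAML2/Redirect/SSO?" + urlencode(params)

# To an anti-tracking heuristic, this is indistinguishable in form from
# a decorated tracking link: opaque state riding on a cross-site URL.
decoration = parse_qs(urlparse(redirect).query)
print(sorted(decoration))  # ['RelayState', 'SAMLRequest']
```

Protections that strip or partition such parameters on cross-site navigation therefore risk breaking the sign-in flow itself, not just trackers.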
These changes have already started. On 3 September 2019, Mozilla released Enhanced Tracking Protection in Firefox, which blocked third-party cookies from known tracking services. On 24 March 2020, Apple released the Safari Intelligent Tracking Prevention feature that disabled the use of third-party cookies by default. Google has announced its plans to discontinue support for third-party cookies in mid-2023. Several groups within the World Wide Web Consortium (W3C) are looking at changes to link decoration and redirects. In all cases, the browser vendors are planning on becoming active participants in authentication flows by verifying and recording the individual’s consent to the authentication process.
The academic community needs to be prepared for the changes that are coming our way. With decades of software built around our community’s understanding of privacy and consent, changes may impact everything from the protocols to the user experience. Come to this session to learn the latest of what’s happening … and what’s going to happen … with web-based authentication.
Discussion by a panel of higher education peers and implementation partners who have successfully deployed midPoint. They will share real-world higher education experience about standing up midPoint and employing best practices, such as resource connections, identity propagation, replacing high-maintenance legacy scripts, and policy-compliant entitlement provisioning. Moderated by Exclamation Labs.
The Ongoing Challenges of Attribute Release: How We Got Here and Where Can We Go Now?
From the onset, federated identity was all about the passing of attributes, versus alternative technologies such as PKI and biometrics that emphasized authentication. However, the friction that the real world has created around facile attribute movement is perhaps the most disappointing characteristic of what has come to pass. This session will begin by identifying the original mechanisms (e.g., required/optional distinctions, user consent, shared semantics) planned to facilitate attribute release and how they came apart in deployment. It will then examine these and other options available going forward, as well as how attribute release must now adapt to new elements of the federated ecosystem such as proxies and portals.
While undertaking the deployment of a new IGA (Identity Governance & Administration) system, the Identity and Access Management team at Carnegie Mellon University experimented with a new-to-the-team development and project methodology. In this talk, we will tell you about our IGA journey, what went right, what could have been improved, the pragmatic approach to Agile deployment that proved successful for the team, and how this success led to a fundamental shift in how the team operates.
Grouper is finally in production at Michigan! Features in v2.6 have now made Grouper right for U-M, so we would like to share what we have been learning about the latest cutting-edge Grouper technology. The new provisioning framework allows us to efficiently manage large (600K-member) access-control groups in two LDAP systems. To provide better end-user experiences, we use the new incremental loader to update users’ institutional data, and incremental provisioners to update targets, in near real time. ABAC (Attribute Based Access Control) is more efficient than composite groups for diverse units — across three campuses, 19 schools and colleges, and a health system — to define the exact cohorts they need without reference/basis group explosion.
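The ABAC idea above, resolving a cohort directly from subject attributes rather than composing many reference/basis groups, can be sketched outside Grouper. In Grouper the membership rule would be a JEXL expression over attributes; the plain-Python predicate and the sample attribute names below stand in for that and are illustrative only:

```python
def abac_cohort(subjects, predicate):
    """Resolve an attribute-based cohort: the members are whoever
    currently satisfies the predicate, with no intermediate groups.
    (In Grouper this rule would be a JEXL expression; a Python lambda
    stands in for it here.)"""
    return {s["id"] for s in subjects if predicate(s)}

# Illustrative subject records; attribute names are placeholders.
people = [
    {"id": "u1", "campus": "ann-arbor", "affiliation": "staff"},
    {"id": "u2", "campus": "dearborn", "affiliation": "student"},
    {"id": "u3", "campus": "ann-arbor", "affiliation": "student"},
]

cohort = abac_cohort(
    people,
    lambda s: s["campus"] == "ann-arbor" and s["affiliation"] == "student",
)
print(cohort)  # {'u3'}
```

With composite groups, the same cohort would require a campus group, an affiliation group, and an intersection group per combination; the attribute rule avoids that explosion.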
We would welcome an opportunity to share a few of our challenges, lessons learned, and what we wish we had known upstream before embarking on the journey to provide Grouper as a decentralized service. We also hope to spawn discussions with other institutions who may benefit from hearing our story and sharing their own struggles, solutions, and vision for the future. Collaboration has been a key to success. Partnering with other institutions for a more collaborative Grouper expedition would be fun and informative.
Cirrus Identity, Evolveum and Exclamation Labs are joining forces to deliver a live demonstration of using midPoint and Cirrus authentication proxy to manage licenses on target systems like Zoom. The main idea for the demo is to manage a list of users who are authorized to have a license, but only users who will actually use the service will consume the license.
That would be achieved by a smart authentication service, which denies authentication to users who are not authorized. The license will be assigned using just-in-time provisioning principles for all authenticated users. At the same time, a connected identity management service will monitor all assigned licenses to unassign licenses using de-provisioning mechanisms for users who will lose the right to have a license.
This demo’s unique value lies in highlighting the potential for new features that can be delivered only with a combination of access management and identity management integrated with the target system.
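The just-in-time licensing flow this demo describes can be sketched as a small state machine: authorization is checked at sign-in, a seat is consumed only on first actual use, and de-provisioning reclaims it. The class and method names below are illustrative, not midPoint or Cirrus APIs:

```python
class LicenseBroker:
    """Sketch of the demo's flow (names are illustrative, not a real API):
    the proxy denies sign-in to unauthorized users, assigns a license
    just-in-time on first successful sign-in, and IdM de-provisioning
    reclaims the seat when authorization is lost."""

    def __init__(self, authorized, seats):
        self.authorized = set(authorized)  # users allowed to hold a license
        self.seats = seats                 # size of the license pool
        self.assigned = set()              # users currently consuming a seat

    def sign_in(self, user):
        if user not in self.authorized:
            return False                   # authentication proxy denies access
        if user not in self.assigned:
            if len(self.assigned) >= self.seats:
                return False               # seat pool exhausted
            self.assigned.add(user)        # just-in-time provisioning
        return True

    def revoke_authorization(self, user):
        self.authorized.discard(user)
        self.assigned.discard(user)        # IdM de-provisions the license
```

The payoff is the one the abstract names: the authorized list can be much larger than the seat pool, because only users who actually sign in consume a license.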
Identity management is no longer only about transferring data between source and target systems and maintaining consistency. With the increasing number of managed systems and entitlements, there is pressure to distribute the management right within IdM to people who are not identity management professionals.
This talk will introduce basic processes from the identity governance and administration (IGA) area implemented in midPoint that enable delegation of management duties to selected users and streamline the accompanying workflows. The first process is role engineering, allowing decision-makers to manage structured business roles. The following processes are access request and access certification, used by users to request roles, entitlements, and other types of access either for themselves or for their peers. Access certification guarantees that manually assigned permissions will be periodically reviewed, preventing them from going out of date. The presentation will also focus on user interface customization that helps inexperienced users orient themselves in the web GUI and effortlessly fulfill their tasks.
In summary, the session should inspire the audience to think about IdM in a broader context that enables the distribution of management rights to responsible persons and reduces IdM professionals’ load.
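The access-certification process described above boils down to a periodic sweep over manually assigned permissions. This sketch flags assignments overdue for review; the record fields, the one-year period, and the dates are illustrative assumptions, not midPoint's data model:

```python
from datetime import date, timedelta

def due_for_certification(assignments, today, period_days=365):
    """Sketch of one certification pass: flag every manually assigned
    permission whose last review is older than the certification period.
    Field names and the default period are illustrative assumptions."""
    cutoff = today - timedelta(days=period_days)
    return [a for a in assignments if a["last_reviewed"] < cutoff]

flagged = due_for_certification(
    [
        {"user": "u1", "role": "grants-approver", "last_reviewed": date(2021, 5, 1)},
        {"user": "u2", "role": "grants-approver", "last_reviewed": date(2022, 9, 1)},
    ],
    today=date(2022, 12, 5),
)
print([a["user"] for a in flagged])  # ['u1']
```

In a full IGA workflow, each flagged assignment would be routed to its responsible reviewer, who recertifies or revokes it, which is exactly the delegation of duties the session advocates.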
Accelerated digital transformation, combined with identity-based attacks, has increased the demand for multi-factor authentication (MFA), which has emerged as a way to increase the robustness of the authentication process. Published in June 2017, the REFEDS Multi-Factor Authentication (MFA) Profile paved a standard way to express and consume MFA events in SAML assertions. Although Shibboleth Identity Provider version 3.3 provides excellent support for implementing MFA using the MFA authentication flow, it does not offer a standard, easy way to install new functionality into the IdP. Shibboleth IdP v4.1+ offers new mechanisms that help develop and deploy new functionality; however, it does not include a complete MFA solution.
RNP is the Brazilian NREN that created and maintains the Brazilian Academic Federation (CAFe), offering specialized support to all institutions that compose the federation. Although each institution in the federation is autonomous, the development and evolution of the IdP software used by these institutions are RNP's responsibility. In this talk, we will present our experience and the choices we have made to offer a complete MFA solution to all Shibboleth Identity Providers in CAFe, currently composed of more than 300 institutions.
The proposed solution was designed to be flexible and extensible, allowing new technologies to be used as extra authentication factors. For that, we used the MultiFactorAuthConfiguration, Modules, and Plugins mechanisms from Shibboleth IdP v4.1+. The modular design makes it easy to develop and deploy new authentication technologies with little friction. Currently, the solution offers two IdP modules: common MFA, which includes views and flows for orchestration plus a backup-code module to act as a second factor; and TOTP, which acts as a second factor using one-time passwords. We are also developing a WebAuthn module, but it is not yet production-ready.
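The TOTP factor mentioned above is based on standard time-based one-time passwords. As a rough sketch of the underlying algorithm (RFC 6238 with RFC 4226 truncation, not the module's actual code; the function names are ours):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, timestep=30, digits=6, at=None):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // timestep)
    msg = struct.pack(">Q", counter)  # 8-byte big-endian time counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify(secret_b32, submitted, window=1, at=None):
    """Accept codes from adjacent 30-second steps to tolerate clock drift."""
    now = time.time() if at is None else at
    return any(hmac.compare_digest(totp(secret_b32, at=now + step * 30), submitted)
               for step in range(-window, window + 1))
```

The verification window is the usual trade-off between tolerating client clock drift and narrowing the replay opportunity.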
The MFA solution also supports second-factor lifecycle management through a web application (one dashboard per IdP). Users can enable, disable, and configure the second factor on their accounts. Furthermore, IdP operators can remove the second factor associated with any user account in the event of a compromise or at the user's request.
Starting in August 2022, CAFe members will be able to update their IdPs with the MFA solution. We estimate that by December 2022, most IdPs will have MFA enabled. We are planning to run some experiments to evaluate the usability of the MFA solution in November 2022 and present the analysis of obtained results during the 2022 Technology Exchange event.
Collaboration has become increasingly important for research and education in recent years. Campuses are reacting by allowing external users access to internal campus systems. That brings new challenges to the authentication and authorization infrastructure (AAI). One of them is the sign-in process and the corresponding user experience.
We consider a simple setup in which the campus IdP is also used for authentication to internal services. Users authenticate with a username and password to the single sign-on service represented by the IdP.
The obvious solution for external users would be to give them a new username and password, but that is not the best approach. A better solution is to let external users sign in with their existing identities, such as those from the InCommon and eduGAIN federations or even social accounts like Google or LinkedIn. In this case, users don't have to remember new credentials, which they greatly appreciate.
The only missing piece is the user experience. Most campus users have never seen a discovery service, or they do not expect it for internal services. They expect to visit the service, type their username and password, and get access. Adding another step to this workflow is not acceptable.
Two years ago, we described this concept at the “TechExtra: InCommon CAMP” online conference in the form of a lightning talk to gather feedback. Now, we have a functional prototype that we would like to introduce. The prototype combines an IdP sign-in screen with a discovery service, which will enable users to sign in directly with their credentials or choose another IdP. Moreover, the solution was designed to be deployed alongside the existing IdP (e.g. Shibboleth) solution and only extend its capabilities without the need to replace or change the current IdP configuration.
As part of our Zero Trust program, Oregon State University is in the RFP review stage on the way to purchasing a commercial IGA (Identity Governance and Administration) system and hiring an implementation partner.
We plan to:
-Replace our Shibboleth IdP with Azure SSO and deploy the Cirrus Bridge
-Redesign our account lifecycles to better reflect real-world scenarios
-Design Enterprise Roles for automated access
-Implement a workflow-based access request process for exceptional access
-Create periodic access recertification campaigns
We expect to be in the middle of this transformation in December 2022 (TechEx time). We can discuss various aspects, such as our reasons for choosing commercial software over open source (TAP), how we will continue to participate in the InCommon Federation, and applying Zero Trust principles in higher education.
We have not spoken to other universities about joining this proposal, but we are aware that the University of Chicago (David Langenberg) is making a similar change by bringing in Okta. There may be opportunities to collaborate with others on this proposal. Also, our presentation could be split into two different topics (Azure SSO vs deploying a commercial IGA) if that helps.
This proposal fits in the following suggested categories:
-Theme: Hybrid IAM Environments – Using commercial and InCommon Trusted Access Platform components to support the academic mission
-Theme: IAM’s Role in Managing Cyber Security Risk – Exploring Zero Trust and IAM
Have services you want to offer schools in your state or region, but it’s tough to manage the individual access details? Want to help your member campuses access those shared services and the world of academic collaboration? The community has news for you!
Learn about OARnet's recent experience and how the community has been working on several initiatives to help a diverse range of organizations participate. There are also education programs and partner-provided tools to help bridge the gap. Please join the panel and explore the resources available to support your needs.
Moderators: Albert Wu and Ann West
Speakers: Jim Basney, Charise Arrowood, Dedra Chamberlin and Mark Fullmer
Advance CAMP (ACAMP) attendees again gather to set the morning's unconference agenda, with 40 break-out slots up for grabs. Any attendee is welcome to propose a topic. Bring your ideas and your proposals! For more information and the attendee-designed agenda, see incommon.org/acamp2022.
Take advantage of Grouper to centralize authorization for your web applications. We explore how we can use Grouper to manage access to items in the Shib IdP UI.
NRENs offer services related to identity and access management (IAM), including national R&E federations, Eduroam, and other federated services. However, services in production operation cannot be used for exploratory research, which could compromise their security. We noted that researchers in RD&I projects devote considerable time to setting up identity management infrastructure for their experiments, only to discard these exploratory environments when the research is done. Setting up a proper infrastructure for IAM experimentation may be more arduous and time-consuming than the research investigation itself. Furthermore, applied research in the IAM area requires a scalable, distributed, controlled, and reliable environment to enable exploratory study.
This presentation introduces GIdLab, a service maintained by RNP to run experiments related to IAM technologies. GIdLab offers specialized RD&I consultancy in IAM with a tailor-made experimentation lab that provides a set of authentication and authorization infrastructures (AAI) and an Eduroam environment ready to be used by researchers and software developers. The service was born as an initiative of RNP's IdM technical committee. Since 2013, more than 70 Brazilian RD&I projects have used the GIdLab infrastructure and consulting.
GIdLab offers researchers an infrastructure composed of: (1) an entire Shibboleth federation, known as CAFe Expresso; (2) a repository with a set of virtual machines and Docker containers to build Shibboleth providers; (3) a SAML federation that uses the SimpleSAMLphp framework; (4) an OpenID Connect environment that uses MITREid Connect and Keycloak; (5) an Eduroam testbed; and (6) first-level customer service via the RNP Service Desk plus specialized technical service to assist researchers and developers. The GIdLab infrastructure is open for experimentation and implementation of new technologies and solutions under investigation.
CAFe Expresso, designed along the lines of CAFe (the Brazilian R&E federation), provides standard Shibboleth IdPs with an LDAP service populated with fictitious users carrying different attributes. It also offers a dynamic discovery service, a metadata aggregation service, and service providers hosting PHP, Java, and Python demo applications. A SATOSA proxy for translation between authentication and authorization protocols and a COmanage platform are also available for experiments with collaborative organizations. CAFe Expresso interoperates with the SimpleSAMLphp federation and is integrated via proxy with OIDC providers.
The Eduroam testbed is the first worldwide initiative to offer an authentication and authorization infrastructure for experimentation based on RADIUS, the IEEE 802.1X protocol, and the Eduroam service. Since May 2018, the Eduroam testbed has been part of the GIdLab infrastructure, providing an environment for researchers and professionals to experiment with technologies and solutions for wireless networks. The Eduroam testbed is configured with three levels of RADIUS servers: local, national (federation), and top-level confederation.
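As a rough illustration of the three-level hierarchy described above, the forwarding decision at the national (federation) RADIUS level can be sketched as follows. The function and server names are hypothetical, not GIdLab's actual configuration; in eduroam, requests are proxied by the realm part of the user's identity:

```python
def national_next_hop(realm, members, country_tld="br"):
    """Hypothetical routing decision at the national (federation) RADIUS level:
    realms of known member institutions are proxied down to their home servers,
    realms outside the country's TLD are proxied up to the top-level
    confederation, and unknown domestic realms are rejected."""
    if realm in members:
        return members[realm]
    if realm == country_tld or realm.endswith("." + country_tld):
        return "reject-unknown-member"
    return "top-level-confederation-server"
```

A visitor from a foreign institution is thus authenticated by their home server, reached via the confederation level, while local credentials never leave the country.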
In the last part of the presentation, we will describe an e-science RD&I project that used GIdLab to develop and test a solution for virtual collaboration management. LIneA (https://www.linea.gov.br/) is a Brazilian initiative supported by three national institutions: the National Observatory (ON), the National Laboratory for Scientific Computing (LNCC), and the National Education and Research Network (RNP), created to give astronomy researchers the means to conduct surveys over large volumes of data. The GIdLab project developed for LIneA explored ways to enable virtual collaborations and their management, including the user lifecycle within these collaborations, and a proxy solution for integrating federated authentication and social logins into LIneA services. The project was concluded in six months, with LIneA's developers working under the guidance of the GIdLab team.
The breaking news is that we would like to promote GIdLab as an international experimentation lab. Anyone interested in contributing, please feel free to contact us during the event.
A growing number of Service Providers (SPs) require their users to sign in using Multi-factor Authentication (MFA) to ensure that SP-provided resources are securely accessed. However, federated Identity Providers (IdPs) are still evolving in their support for MFA. A flexible bridge solution is needed.
To address this challenge, the NIAID Discovery and Collaboration Platform (NDCP) developed a Dynamic MFA solution that uses campus MFA assertions when available and NDCP MFA when not. This solution combines three powerful tools: 1) PrivacyIDEA for token management and runtime authentication, 2) COmanage for NDCP MFA registration when IdPs don't provide MFA, and 3) SATOSA for SAML assertion and flow management. MFA-secured authentication from IdPs can be used directly even if the IdP does not signal it, and the solution can automatically adjust when an IdP starts signaling MFA.
Join us to learn why Dynamic MFA is essential for Virtual Organizations looking to leverage federated MFA, and how to make it work. Presenters will cover implementation and code release, the MFA deployment process, and challenges/lessons learned along the way.
Several major R&E HPC centers and international HPC consortia are migrating from legacy authentication infrastructures to token-based authentication and authorization methods. The 2020 and 2021 Workshops on Token Based Authentication and Authorization, hosted by TAGPMA and the NSF-funded SciAuth project, brought together several stakeholders, developers, and implementors of token-based authentication and authorization infrastructure, to present and compare their migration efforts under way, and to promote common, interoperable methods. A third such workshop is planned for late 2022. This panel will invite prior workshop presenters and stakeholders to summarize their efforts to date, to present lessons learned and future plans, and to discuss best practices for interoperability. The goal is to broaden awareness of these significant authentication, authorization and identity infrastructure migration efforts, and to encourage participation and collaboration in standards development and interoperability assurance for token-based AAI deployments.
-2021 Workshop on Token-based Authentication and Authorization
-2020 Workshop on Token-based Authentication and Authorization
Throughout 2022, CACTI (Community Architecture Committee for Trust and Identity) has been tracking and assessing trust and identity technologies to identify their impact and the opportunities they may present. This process helps focus limited resources, serves as genesis material for new working groups, and adds depth to existing activities. CACTI is taking this work to another level with a culminating outlook report on trust and identity in 2022 and beyond. This presentation will give an overview of the key areas of the report, offer insight into the changing T&I landscape, and leave time for Q&A during the session.
While SSI has its roots in the US, awareness in Europe has recently spiked as the revised EU eIDAS legislation puts SSI-based technology at the forefront of the minds of decision makers and technologists alike. With large-scale pilots of wallet technology being planned, and through activities such as the technology-driven European Blockchain Services Infrastructure (EBSI), the EU aims to set the stage for a digital wallet for every European by 2024.
For the European academic trust and identity community, these developments present many challenges, such as how this SSI ecosystem relates to federated identity and what benefit using a blockchain offers. But they also present opportunities: could this help us save costs when distributing diploma information? Can we build a persistent, long-lived identity for R&E? Can we finally get rid of these proxies?
Several European NRENs are already involved in the EU projects, and their involvement is likely to grow with the extended EU funding. Existing projects like GÉANT 4.3 have also been looking into SSI technologies, and the upcoming GÉANT 5 project will feature a separate work package on SSI.
This presentation will provide an overview of ongoing work on SSI by various EU NRENs and the GÉANT project. It will describe R&E SSI use cases identified as part of work done in the GÉANT 4.3 project, and will look forward to pan European developments and how EU NRENs are engaging with this, discussing some of the dilemmas and opportunities that are presented by the EU digital wallet.
We will briefly review the implementation of the second iteration of InCommon Baseline Expectations for federation entities and open a discussion of next steps to enhance security, interoperability, and scalability among federated entities.
Are there additional requirements that should be imposed on all entities (a v3 of Baseline Expectations)? Or should the focus be on explicit best practices or “badges” to identify an entity’s conformance to one or another good practice? What features or behaviors would increase interoperability or scalability (examples might include signaling the use of MFA or “strong” authentication, identity assurance level, or conventions for the release of attribute bundles)? But bring YOUR examples! What practices would increase the value of federation to YOUR institution?
The Big Ten Academic Alliance, with help from friends at other InCommon institutions, has been hard at work addressing the lack of standards in provisioning and de-provisioning practices. We’ve developed a best-practices cookbook with recipes for provisioning and de-provisioning to help everyone from institutions just setting up an identity management framework to those with a well-established one in need of a tune-up. Attend this session to learn about the cookbook and get a first look at what it contains. Bring your provisioning puzzles with you to put our work to the test.
Think about IAM strategically and REALIZE the value! Join us to hear what has worked, listen to the people who made it happen (higher education representatives), and take away lessons learned.
This session shares the value of stepping back to take a cohesive look not only at where you are today and your technical goals for IAM, but also at the learners, processes, staffing, and more, and how these couple with the technology to ensure a successful IAM system that meets the needs of all.
In July 2022, Erik Scott and Josh Drake published the Federated Identity Management Cookbook on behalf of CI Compass and Trusted CI. The cookbook is meant to be a resource for research cyberinfrastructure operators looking to implement solutions or overcome challenges related to federated identity management in the NSF research community.
In this lightning talk, authors Erik Scott and Josh Drake will give a brief overview of the contents of the cookbook and the needs that led to its creation as part of the CI Compass/Trusted CI Identity Management Working Group, and touch on the ongoing work benefiting researchers and higher ed institutions at Trusted CI and CI Compass.
The Federated Identity Management Cookbook can be found at https://ci-compass.org/resource-library/publication-the-federated-identity-management-cookbook
AWS provides many different ways to support authentication approaches commonly called “single sign-on.” These all allow users to log in to AWS or AWS-hosted applications using their campus credentials, without needing to create a local user within the account or application. This session provides an overview of many of these approaches, with a discussion of some of the implications of using each one. Sample topics to be covered:
- Authenticating infrastructure level (e.g., console, CLI) access to AWS
–SAML integration with the IAM console
- Authenticating applications hosted in AWS
–Using Cognito with SAML
–Authenticating to AWS “end user compute” services (e.g., Appstream, Quicksight)
–Hosting Shibboleth SPs in AWS
–Using OAuth/OIDC and custom OPs
The number of digital identities a single user possesses has increased significantly in recent years. The problem is that users want to access services regardless of which identity they use. Account linking, the process of joining a user’s identities, is a solution to this problem.
However, even today, the whole process is often handled either manually by an administrator or through a user-driven approach that is typically too complicated for users to navigate smoothly. Users need a straightforward interface that guides them step by step through the whole process.
Fortunately, using the OIDC protocol, we can easily integrate account linking directly into existing web applications. This design leads to a secure self-service solution with the desired improvement in the overall user experience.
This lightning talk introduces challenges in the account linking process and describes how they can be addressed using the OIDC protocol.
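A minimal sketch of the data model behind such linking (hypothetical names, not the presented solution's code): OIDC identifies a user uniquely by the issuer/subject pair, so linking amounts to mapping several (iss, sub) pairs to one local account.

```python
class AccountRegistry:
    """Toy registry mapping OIDC identities, keyed by (iss, sub), to local accounts."""

    def __init__(self):
        self._by_identity = {}  # (iss, sub) -> local account id
        self._accounts = {}     # local account id -> set of linked (iss, sub)

    def register(self, account_id, iss, sub):
        """Create the first identity-to-account mapping."""
        self._attach(account_id, iss, sub)

    def link(self, account_id, iss, sub):
        """Attach an additional identity; call only after the user has freshly
        authenticated at the new provider, so ownership of it is proven."""
        self._attach(account_id, iss, sub)

    def _attach(self, account_id, iss, sub):
        key = (iss, sub)
        if key in self._by_identity:
            raise ValueError("identity already linked to an account")
        self._by_identity[key] = account_id
        self._accounts.setdefault(account_id, set()).add(key)

    def resolve(self, iss, sub):
        """Return the local account for an incoming OIDC login, if linked."""
        return self._by_identity.get((iss, sub))
```

The essential security property lives in the comment on `link`: an identity may be attached only inside a session where the user has just proven control of it at the new provider.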
The original model of federated identity, a user interacting directly with a resource provider using a federated identity provider, is being challenged by the rise of middlethings. These things – portals, proxies, science gateways, complex content platforms, etc. – have sprung up in the trust ecosystem and offer some real benefits such as scaling and integration, but also create significant issues. Middlethings themselves have a diverse set of features, from simple protocol translation to management of attributes, brokering of trust, and diverse resource aggregation. But privacy and security concerns such as incident handling and data minimization are affected by the technical and compliance nature of these intermediaries.
This session will examine the nature of middlethings, the issues they raise in a federated landscape and how to accommodate them. It may touch on issues around the end-entity approach, other meta-data options, alternative registrars, new codes of conduct, or wherever else in this space the speakers want to go.
View the related Framing a Discussion to Foster SP Middlething Deployments paper.
This talk focuses on how the OmniSOC uses its vast collection of differing data sources (evidence libraries) to detect malicious activity. We will discuss these data sources in the context of collected evidence and MITRE ATT&CK, and explore how detection use cases are developed to detect attacker activity for relevant threats.
This panel discussion will focus on how Internet2, REN-ISAC, and EDUCAUSE work collaboratively and independently to provide support for cybersecurity professionals and the higher education community. The presenters will provide an update and showcase initiatives, projects, and resources each organization provides to the community and how they all collaborate.
This session will provide an overview of Internet2’s Routing Integrity Initiative. This initiative seeks to improve the routing integrity of the Internet2 ecosystem by providing:
- actionable routing security reporting
- routing security assessments for connectors and universities
- educational outreach
- updated modes to manage routing policy expressed to Internet2
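One core mechanism behind such routing security reporting is route origin validation against published RPKI ROAs. A small sketch of the RFC 6811 classification logic (illustrative only, not Internet2's tooling):

```python
import ipaddress

def rov_state(prefix, origin_asn, roas):
    """Classify a BGP announcement per RFC 6811 route origin validation.
    roas is an iterable of (roa_prefix, max_length, asn) tuples."""
    ann = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, asn in roas:
        roa = ipaddress.ip_network(roa_prefix)
        if ann.version == roa.version and ann.subnet_of(roa):
            covered = True  # at least one ROA covers this prefix
            if asn == origin_asn and ann.prefixlen <= max_len:
                return "valid"
    # Covered but no ASN/length match -> invalid (possible hijack or
    # misconfiguration); not covered by any ROA -> not-found.
    return "invalid" if covered else "not-found"
```

Networks typically drop "invalid" announcements while still accepting "not-found", since much of the routing table has no ROA yet.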
This talk aims to present a first-of-its-kind analysis of Internet denial-of-service (DoS) attacks carried over IPv6. Volumetric attacks such as TCP SYN floods and amplification/reflection-style vectors are well known, but most if not all attacks are typically attributed to IPv4 transport. We will show that IPv6-based attacks are not only plausible but occur with regularity. Based on our rich attack alert data from networks around the globe, we measure how prevalent IPv6-based attacks are and how they compare to IPv4-based attacks. We also highlight some areas of hope and some concerns, giving the I2 community real-world experience so they can be better informed about and prepared for the IPv6 threat landscape of today and tomorrow.
Alongside NGI, we have taken the opportunity to architect and implement a greenfield secure management network. Ryan Harden and Adair Thaxton will discuss the architecture, support scripts, and secure design of the management network.
In this presentation, higher education professionals will learn from the experiences of the Trusted CI and ResearchSOC teams about the unique cybersecurity needs of research projects on campus. Trusted CI, the NSF Cybersecurity Center of Excellence, is now in its 10th year of operation, and over that time has performed 64 in-depth 6-month cybersecurity engagements with NSF-funded projects on campuses across the country, including 10 NSF Major Facilities. ResearchSOC is now in its 4th year of operation and is providing cybersecurity services to multiple NSF Major Facilities, along with broader outreach and training to the higher education community about cybersecurity for research projects.
Given these engagements, Trusted CI and ResearchSOC have insight into the cybersecurity needs of research projects, including cybersecurity policies and technical controls. Key findings include the mismatch of confidentiality-focused cybersecurity controls (such as NIST 800-171) with many open science projects, for which the availability and integrity of open research data and cyberinfrastructure are greater concerns. Another key finding is the value of support from campus cybersecurity and research computing teams to the success of these research projects. The presentation will highlight successful examples of these campus collaborations in addressing threats (e.g., ransomware) and enabling secure collaboration (e.g., via Science DMZs).
The REN-ISAC offers cybersecurity assessments of research and educational organizations, conducted by higher education cybersecurity professionals. With experience from dozens of assessments, many lessons have emerged that can be applied to all organizations. This session reviews collective trends, recommendations, and priorities discovered by REN-ISAC’s assessors and focuses on practical advice to help improve cybersecurity programs. It covers the top recommendations provided; how institutions have used the assessments to advance their cybersecurity programs; and practical advice on getting the most leverage from assessments and audits to support cybersecurity programs.
The buzz around Zero Trust is amplifying across the higher education community as institutions look to fortify their cyber defenses against threats like ransomware and adopt processes to meet the security challenges of remote work. The major issue is that too many pitches claim to “solve” your Zero Trust needs. This is just not true. Zero Trust architecture (ZTA) is a framework and requires focus and strategy to implement successfully. Your institution may be among those wondering how to make Zero Trust actionable, and how doing so can help you meet the data protection and security priorities outlined by EDUCAUSE in its 2022 Top 10 IT Issues list.
Join Palo Alto Networks and our NET+ partners for a discussion around removing the fear, uncertainty, and doubt around zero trust that includes real-world examples and tactics for ACTUALLY reducing risk and improving security outcomes.
Ransomware continues to victimize organizations around the world, taking money from other priorities, threatening data, and even causing service outages. The field of ransomware continues to evolve. Some of its victims are opportunistic while others are targeted. Response has also evolved, from playbooks to the purchase of cyber insurance. This session explores current ransomware threats and response and provides guidance for mitigating the impact of ransomware.
DNS is one of the protocols most widely abused by threat actors, who use it in unconventional ways to hide within normal traffic. Beyond threat actors, many other service providers and vendors actively use, or rather misuse, DNS to deliver their intended services. In-depth analysis of DNS logs collected over a long period revealed some very interesting legitimate use cases of the DNS protocol beyond its normal resolution service.
We coined the term “off-label use of DNS” to represent these use cases. One of the main reasons DNS is used, or rather misused, for these off-label purposes is its speed of data transfer and low bandwidth overhead. These off-label uses of DNS leak important information about clients and the software they are running, which network security defenders and analysts can leverage in a variety of ways to improve detection on the network. This presentation will go over some of these legitimate off-label use cases and how analysts can leverage them to detect malware trends in the network, and much more, just by analyzing DNS logs.
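One simple technique analysts commonly apply to DNS logs: data encoded into query names (tunnelling, telemetry beacons, DGA lookups) tends to produce long, high-entropy leftmost labels. A sketch of that heuristic, with illustrative threshold values of our choosing:

```python
import math
from collections import Counter

def label_entropy(qname):
    """Shannon entropy (bits/char) of the leftmost DNS label. Encoded or
    random-looking first labels score high; human-chosen names score low."""
    label = qname.rstrip(".").split(".")[0].lower()
    n = len(label)
    if n == 0:
        return 0.0
    counts = Counter(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def flag_suspicious(qnames, threshold=3.5, min_len=16):
    """Flag query names whose first label is both long and high-entropy."""
    return [q for q in qnames
            if len(q.split(".")[0]) >= min_len and label_entropy(q) > threshold]
```

In practice this is only a first-pass filter: CDN hostnames and legitimate off-label users also produce encoded labels, so flagged names still need correlation with volume, timing, and destination.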
With cyber attacks increasing in frequency and sophistication, education leaders are rightly focused on how to secure their data and organizations and how to mitigate risks. In this session you’ll hear from leading institutions who have taken action to improve their cybersecurity while also assuring a high return on investment. We’ll also describe the Google Cloud Risk Protection Program. You’ll leave with insights about how to meet your cybersecurity insurance needs in educational technology so you can focus your efforts on what matters more — the learners.
We will present the rapid progress, vision and outlook across multiple state of the art development lines within the Global Network Advancement Group and its Data Intensive Sciences and SENSE/AutoGOLE working groups, designed to meet the challenges of the Large Hadron Collider and other science programs with global reach.
Since it was founded in the Fall of 2019 and the working groups were founded in 2020, in partnership with Internet2, ESnet, CENIC, ANA, RNP, CERN, StarLight, PRP/NRP, N-DISE, AmLight, and many other leading research and education networks and network R&D projects, the GNA-G working groups have deployed an expanding testbed spanning six continents which supports continuous developments aimed at the next generation of programmable networks interworking with the science programs’ computing and data management systems. The talk will cover examples of recent progress in developing and deploying new methods and approaches in multidomain virtual circuits, flow steering, path selection, load balancing and congestion avoidance, segment routing and machine learning based traffic prediction and optimization. The results of examples demonstrated at SC22 and under current development will be included.
Research Computing and Data (RCD) Professionals enable digital scholarship. As the importance of RCD Professionals has grown, so has the need for organizations to support and advocate for them. In this session, we discuss the road to a 21st-century RCD professional society and the work the RCD Nexus is undertaking to build a Research Computing and Data Resource and Career Center. We will describe the community context and the roles that adjacent efforts currently play. We will look to the future of these efforts and engage attendees to explore what these and other community efforts should provide to support RCD professionals as they work to advance research. Our work is currently funded as an NSF-funded Cyberinfrastructure Centers of Excellence (CI CoE) pilot (award NSF-2100003 – https://nsf.gov/awardsearch/showAward?AWD_ID=2100003). CaRCC was originally funded by NSF RCN award OAC-1620695 (PI: Jim Bottum – https://www.nsf.gov/awardsearch/showAward?AWD_ID=1620695), “RCN: Advancing Research and Education through a national network of campus research computing infrastructures – The CaRC Consortium,” and continues largely through volunteer efforts.
The Ecosystem for Research Networking (formerly the Eastern Regional Network) has partnered with the Rutgers CryoEM & Nanoimaging Facility (RCNF) to launch the CryoEM Federated Instrument Pilot Project, whose goal is to facilitate inter-institutional collaboration at the interface of computing and electron microscopy by removing many of the barriers encountered when accessing these resources.
Instruments in the RCNF include a Thermo Fisher Scientific (TFS) Talos Arctica transmission electron microscope (TEM), indirectly attached to the university network via a bridging PC, with limited remote access behind the university’s VPN and a secured VNC server. This project is tasked with deploying an easy-to-use, secure, web-based resource portal providing remote federated authorized access to the lab’s TEM; workflows with real-time monitoring for experiment parameter adjustments and decisions; local edge-computing pre-processing of raw image data; and additional analysis on either Rutgers’ private HPC cluster, Amarel, or public cloud resources. The implementation is based on the ERN Federated OpenCI Lab’s Instrument CI Cloudlet design. The resulting ecosystem will foster team science and scientific innovation, with emphasis on under-represented and under-resourced institutions, through the democratization of these scientific instruments. We will discuss the efforts leading to the current state of deployment of this actively developing project.
Kubernetes has continued to climb in popularity across numerous environments, from commercial clouds to campus networks and even homelabs. This session will explore leveraging the Nautilus Kubernetes cluster, created initially by the Pacific Research Platform and now operated and expanded by a variety of NSF and community projects.
This session specifically addresses two activities within the Great Plains Network region:
1. Providing a frictionless (or rather, low-friction) path to a research/education environment primarily focused on specialized JupyterHub instances.
2. The ancillary benefit for campuses and RENs of contributing to Nautilus: gaining a low-maintenance, orchestrated perfSONAR environment.
Universities across the Great Plains region have worked together for decades to support one another in gaining access to and leveraging research cyberinfrastructure. In the past few years, several universities have come together to develop an NSF-funded Cyber Team, engaging with under-resourced universities to improve aspects of data movement and scientific workflows and further the advancement of a professional development mentorship framework. This rapidly led to an “expansion pack”: a CC* Compute project led by Kansas State University that built upon the community aspects of the well-connected institutions to simplify the process of leveraging the Open Science Grid, provide compute resources to smaller institutions, and, most importantly, help onboard existing high-performance computing resources to the Open Science Grid.
This session is a panel led by PIs of the lead institutions and technical leadership of the projects that have helped highlight that science is a team sport.
Massive scientific data flows are extremely time sensitive yet fragile, as they depend on system capabilities and transient characteristics of the infrastructure.
To achieve guaranteed high disk-to-disk throughput between end systems, the DTN hardware, OS parameters, software/orchestration stack, data transfer protocol, and file management algorithm all need to be customized per use case. No one size fits all.
As part of Team Ciena-iCair-UETN’s winning solution for the Data Mover Challenge presented at SC Asia 2022, we developed ‘Optimized DTN as a Service (ODaaS)’.
Some of the characteristics of ODaaS include:
- Ability to tune DTNs (OS, Network, storage) to an optimal configuration based on capabilities of the installed hardware.
- A data transfer protocol that reduces I/O overhead and directs data transfers to the end process.
- A data management entity that optimizes file sizes to achieve maximum throughput.
- Security and identity management systems widely compatible across the industry.
- Real-time predictive analytics and system monitoring capabilities that can advise and help fine-tune the transfer strategy.
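To illustrate the kind of tuning the first bullet refers to, here is a minimal sketch (not part of ODaaS; all names and the doubling rule of thumb are our own illustrative assumptions) of sizing TCP buffers for a long-haul path using the bandwidth-delay product:

```python
# Illustrative sketch: sizing TCP buffers for a long-haul DTN path
# using the bandwidth-delay product (BDP). Function names and the
# 2x headroom factor are assumptions for illustration only.

def bdp_bytes(link_gbps: float, rtt_ms: float) -> int:
    """Bytes in flight needed to keep a link of `link_gbps` full
    at a round-trip time of `rtt_ms` milliseconds."""
    bits_in_flight = link_gbps * 1e9 * (rtt_ms / 1e3)
    return int(bits_in_flight / 8)

def suggested_tcp_max_buffer(link_gbps: float, rtt_ms: float) -> int:
    # A common rule of thumb is to allow at least one BDP of buffer;
    # doubling it leaves headroom for loss recovery.
    return 2 * bdp_bytes(link_gbps, rtt_ms)

if __name__ == "__main__":
    # 100 Gbps path, 150 ms RTT (e.g., a long international hop)
    print(bdp_bytes(100, 150))               # prints 1875000000 (~1.9 GB)
    print(suggested_tcp_max_buffer(100, 150))
```

A tuning service would compare figures like these against the host’s installed memory and current sysctl settings before recommending a configuration.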
The use of computing in science and engineering has become nearly ubiquitous. Whether researchers are using high performance computers to solve complex differential equations modeling climate change or using effective social media strategies to engage the public in a discourse about the importance of Science, Technology, Engineering, and Mathematics (STEM) education, cyberinfrastructure (CI) has become our most powerful tool for the creation and dissemination of scientific knowledge. With this sea change in the scientific process, tremendous discoveries have been made possible, but not without significant challenges.
The Research Innovation with Scientists and Engineers (RISE) team was created to address some of these challenges. Over the past two years, Penn State Institute for Computational and Data Sciences’ (ICDS) research staff have partnered with RISE CI experts who facilitate research through a variety of CI resources. These include, but are not limited to, Penn State’s high performance computing resources (Roar), national resources such as the Open Science Grid and XSEDE, and cloud services provided by Amazon, Google, and Microsoft.
Using funds provided by the National Science Foundation (NSF) CC* program, the RISE team has had direct engagement through multiple activities that benefit research projects conducted at Penn State. In addition, the RISE team has conducted seminars, workshops, and other training activities to bolster the cyberinfrastructure literacy of students, postdocs and faculty across disciplines. The RISE team has grown as a workforce shared across investigators who have consulted on projects both large and small. We show that the RISE team has already paid substantial dividends through increased productivity of faculty and more efficient use of external funding.
Distributing in-network computing (DinC) is an emerging topic in deep programmable networks. The current in-network computing literature [1-3] explores a minimalistic set of tasks that fits the memory and processing constraints of a single node (e.g., a programmable switch, SmartNIC, etc.). Often, a realistic computational task needs more resources than are available on a single node. The original task must therefore be subdivided and distributed among the available resources. Furthermore, many such tasks must be hosted simultaneously. There is very little mindshare on how to achieve this goal. Alternate computing models such as one-to-many, many-to-one, or many-to-many are unexplored in this context. This constitutes a promising area of future research.
The computing model for a programmable switch (or SmartNIC) is based on (i) reconfigurable match-action tables that are data driven and (ii) control flow that captures the invariants. Here, we consider an artifact that encompasses both instructions and table entries. Given an artifact and a network topology, the end goal is to distribute the artifact onto the network nodes and ensure its smooth operation. The intermediate steps, as we envision them, involve (i) framing distribution as an optimization problem satisfying constraints and objective(s), (ii) design choices in computing the distribution, (iii) deployment and verification of the distribution, (iv) detecting and handling normal and abnormal events post-deployment, and (v) optimizing the DinC in tune with changing network conditions. Each of these areas presents a host of challenges.
This session will discuss the following topics, among others:
- Formal definition and analysis of the artifact distribution problem
- Automated and semi-automated distribution of the artifact
- Challenges in deployment of artifacts to a network of DinC nodes
- Reconciling the conflicts between computational path and routing path
- Isolating DinC path/node/slice failure from the rest of the system
- Verification and troubleshooting of DinC deployment and post-deployment
- Resiliency to network failures and attacks
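To make the distribution problem above concrete, here is a toy sketch. It greedily places artifact pieces (e.g., match-action table shards) onto nodes subject to per-node memory limits; all names and numbers are hypothetical. A real formulation would be an ILP that also satisfies routing and ordering constraints, as the session abstract notes; first-fit-decreasing here only illustrates the shape of the problem:

```python
# Toy sketch of artifact distribution: assign pieces of a decomposed
# in-network program to nodes with limited memory. First-fit-decreasing
# heuristic; a production solver would add routing/ordering constraints.

def distribute(pieces, nodes):
    """pieces: {piece_name: memory_needed}; nodes: {node_name: capacity}.
    Returns {piece_name: node_name}, or raises if the artifact won't fit."""
    free = dict(nodes)          # remaining capacity per node
    placement = {}
    # Place the largest pieces first (first-fit decreasing).
    for piece, need in sorted(pieces.items(), key=lambda kv: -kv[1]):
        for node, cap in free.items():
            if cap >= need:
                placement[piece] = node
                free[node] = cap - need
                break
        else:
            raise ValueError(f"piece {piece!r} does not fit on any node")
    return placement
```

For example, distributing `{"acl": 40, "nat": 30, "telemetry": 20}` over two nodes of capacity 50 each places `acl` alone on one node and packs `nat` and `telemetry` onto the other.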
High-performance data transfer is a critical enabler of data-intensive science. Following a successful effort to establish routine performance levels in excess of 1 Petabyte (PB) per week between production Data Transfer Node (DTN) clusters at four supercomputer centers, ESnet, the Engagement and Performance Operations Center (EPOC), and Globus have organized the Data Mobility Exhibition (DME). The DME demonstrates data transfer capabilities between campus DTNs and national facilities, provides a collaborative environment for improving data transfer performance for campus DTNs, and offers a data mobility scorecard to baseline the performance of campus data architectures. This talk will describe the DME, progress made to date, and the benefits of participation for interested campuses.
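For a sense of scale, the 1 PB/week figure implies a specific sustained rate. A quick back-of-envelope calculation (assuming decimal units, i.e., 1 PB = 10^15 bytes, which may differ from the measurement convention the projects actually use):

```python
# Back-of-envelope: sustained throughput needed to move 1 PB per week.
SECONDS_PER_WEEK = 7 * 24 * 3600   # 604800 seconds
bytes_total = 1e15                 # 1 PB, decimal units
gbps = bytes_total * 8 / SECONDS_PER_WEEK / 1e9
print(f"{gbps:.1f} Gbps sustained")   # prints "13.2 Gbps sustained"
```

In other words, a week-long 1 PB campaign must average roughly 13 Gbps disk-to-disk, around the clock, which is why end-to-end DTN tuning matters.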
Many smaller, mid-sized and under-resourced campuses, including MSIs, HSIs, HBCUs and EPSCoR institutions, have compelling science research and education activities along with an awareness of the benefits associated with better access to cyberinfrastructure (CI) resources. These schools can benefit greatly from resources and expertise for cloud adoption to augment their in-house efforts. The Eastern Regional Network’s (ERN) Broadening the Reach (BTR) working group is addressing this by focusing on learning directly from the under-resourced academic institutions in the region on how best to support them for research collaboration and advanced computing requirements.
In this talk we will present findings and recommendations for smaller institutions on challenges and opportunities of exploring the cloud for research and education. These findings are based on engagements with the community, including results of workshops and surveys.
To foster greater and more consistent use of the new 100 Gbps connections being deployed in the national RNP backbone, the e-Cyber project aims to deliver high-performing services to the most infrastructure-demanding research centers in Brazil. To do this, the project draws inspiration from the “superfacility” concept adopted by initiatives like the GRP (Global Research Platform) and EOSC (European Open Science Cloud). However, one of our biggest challenges is to engage the client institutions, bringing them to co-create solutions and participate in the project governance.
In this session, the e-Cyber project will be presented and the audience will be invited to take part in a debate on how to manage a superfacility, covering aspects like governance, sustainability, business models, and research engagement.
Research Computing groups have established themselves as a one-stop shop for researchers needing computational, storage, and consulting services. However, the growing adoption of cloud technologies in research workflows and the need to meet mandated security standards have left researchers searching for alternative support. By joining forces with cloud and security professionals, Research Computing groups can cast a wider net to engage with researchers and meet their needs. While sound in theory, in practice this is challenging given the disparate backgrounds these groups have. Questions often arise about whether the groups are offering competing services, what the intended audiences will be, and what level of expertise we can expect them to have. In this lightning talk, Research Computing at the University of Colorado Boulder presents the challenges encountered when trying to present a wide variety of services to engage with researchers, and how those attempts have gone.
The team led by the University of Illinois at Urbana-Champaign provides operations and integration services as part of the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program. Science and engineering research and education increasingly relies on a seamlessly integrated, secure, and robust ecosystem of advanced computing and data technologies and human expertise. The COre National Ecosystem for CyberinfrasTructure (CONECT) project delivers innovative integrations across the NSF-funded cyberinfrastructure ecosystem in the areas of operations, data and networking, and cybersecurity. CONECT uses flexible mechanisms to integrate the rapidly diversifying set of non-traditional resources that previously have been left on the fringes of the national cyberinfrastructure. CONECT builds upon the successes of its predecessors and moves toward a more agile, dynamic, and inclusive ecosystem.
This session will describe how CONECT supports diverse NSF research that utilizes Internet2’s advanced network services across many member institutions of the Internet2 community. CONECT introduces and coordinates a framework-based approach to ecosystem-wide integration using step-by-step integration roadmaps and modular concepts that enable entirely new kinds of cyberinfrastructure resources to fully participate in the coordinated national cyberinfrastructure. This project will focus on enabling cyberinfrastructure components to discover and securely access integrated resources and other components; increased network reliability enabled by expanded metrics insights and availability; opportunities for more control over data transfer, such as transfer scheduling and network programmability; a new Cybersecurity Governance Council and coordination groups; modern authentication strategies and technologies; and broad protection of ACCESS resources through the sharing of cybersecurity threat intelligence. Through these innovative efforts, CONECT will democratize participation in the cyberinfrastructure ecosystem, deliver more value at all layers of the service and support stack, and enable the entire ACCESS program to achieve success in advancing cyberinfrastructure in all areas of science and engineering research and education.
Fundamental to any networking organization is collecting and understanding measurement and monitoring data. On the collection side, a wealth of data sources is available about the network, including flow records, interface counters, active measurements, and much more. Understanding the data, though, can still be a considerable challenge, especially when one considers all the potential questions to be asked about a network. Below are just a few examples of the types of questions asked, each with a very different audience:
- How do customers know if the network is getting the performance that they should expect from it?
- How do engineers know which organizations are driving traffic on congested links?
- How does management demonstrate the value required to make the business case for the network to funding sources?
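The second question above, for instance, reduces to aggregating flow records per organization on a given link. A minimal sketch follows; the record layout and all names here are hypothetical illustrations, not the schema of perfSONAR, NetSage, or Stardust:

```python
# Illustrative sketch: answering "which organizations drive traffic on
# a congested link?" by aggregating flow records. The (link, org, bytes)
# record shape is an assumption for illustration only.
from collections import defaultdict

def top_talkers(flows, link, n=3):
    """flows: iterable of (link_id, org, bytes_transferred) tuples.
    Returns the n organizations moving the most bytes over `link`,
    as (org, total_bytes) pairs sorted by descending volume."""
    totals = defaultdict(int)
    for link_id, org, nbytes in flows:
        if link_id == link:
            totals[org] += nbytes
    return sorted(totals.items(), key=lambda kv: -kv[1])[:n]
```

In practice the tools discussed in this session handle the hard parts this sketch ignores: sampled-flow scaling, mapping addresses to organizations, and doing the aggregation continuously at scale.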
This session will be a Q&A with leads from three common tools used for measurement and monitoring on R&E networks: perfSONAR, NetSage, and Stardust. The panel will discuss how they are addressing the challenges of making network data more useful. They will also highlight how the represented projects and the concepts behind them can be beneficial to the broader R&E networking community and promote a conversation as to how best share ideas going forward.
Speakers: Ed Balas, ESnet; Mark Feit, Internet2; Andy Lake, ESnet; Jennifer Schopf, TACC