Conference & Workshop programmes
The format of the conference will be dynamic and interactive, engaging the audience and ensuring lively, animated discussions. The opening day includes plenary sessions on e-Infrastructure for high-energy physics, digital humanities, and cloud computing; some sessions will include panel discussions.
The second and final day includes sessions on data services and technologies, and bio- and medical sciences.
In the days before the conference, workshops on center operations, data services, science gateways, security, and services for bioinformatics will be hosted by the Norwegian University of Science and Technology.
Below you will find the programme for both the workshops, held on 13–14 May, and the conference, which will run on 15–16 May.
For easier viewing of the conference programme, you can collapse the workshop days by clicking the hexagon in the right-hand corner of each workshop day's programme.
Workshop & Session descriptions
Click the show/hide icons to read more about the workshops and the sessions.
Please note that all titles are tentative.
Since 2009 the University Center for Information Technology (USIT) at the University of Oslo (UiO) has seen an increased demand for services for sensitive data. This is mostly data covered by the Personal Data Act §2, point 8 (religion, sex, health, union membership and prosecutions). The increased use of video, MR imaging and DNA sequencing of humans has created an enormous need for storage and computing resources for sensitive data, by far exceeding the available resources of the classic “single offline computer dedicated to sensitive data”. To meet this demand, USIT has run a project called Services For Sensitive Data (TSD) since ~2008. With the launch of version 2.0, the project will offer virtual servers, storage, high-performance computing and data collection within a secure environment. The system is based on hosting virtual research servers behind a FreeBSD 2-factor authentication gateway. All projects are VLAN-separated, and storage is provided by the new 7 PB storage resource “Astrastore” at UiO. A dedicated HPC resource is currently being installed inside the secure environment to meet the computational needs. To enable secure data harvesting, we have enabled PGP encryption of the UiO web questionnaire “Nettskjema”. Further, to enable time-point studies and to identify respondents correctly, we offer the option of using the governmental ID portal as the login to the web questionnaire. USIT plans to offer these services to the research communities by summer 2013.
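The abstract does not say which 2-factor mechanism the FreeBSD gateway uses; as a purely hypothetical illustration, the sketch below computes time-based one-time passwords (TOTP, RFC 6238), one common second factor, using only the Python standard library.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" (base32-encoded), T = 59 s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # -> 287082
```

In such a scheme the gateway compares the code typed by the user against the one computed server-side for the current time step, usually tolerating one step of clock drift.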
The concept of traditional system administration of large high-performance computing operations, where all hardware is close to users and administrators, has changed in recent years. With the evolution of high-speed network connections between countries, hardware can now be hosted far from users and system administrators, in a way that is transparent to them. The national high-performance computing centres of Denmark, Norway, Sweden and Iceland have jointly owned and operated a supercomputer in Iceland since 2011, sharing computational resources across country boundaries. The main objective of the joint ownership is to make the investment and operation cost-efficient without sacrificing service to users. The system consists of 3456 cores, 71 TB of storage and 7 TB of memory, and is run by four system administrators from four different countries. Nordic High Performance Computing has set an example of an innovative concept for HPC operation, where technical administrators reside in different parts of the world and yet the operation is optimal, secure and reliable. This presentation will give an overview of the project and lessons learned.
With the establishment of the strategic research area for e-Science in Sweden, additional funding for advanced user support was made available. Specifically, the Swedish e-Science Research Centre (SeRC) prioritized funding of a number of so-called application experts. The application experts are all affiliated with an HPC center, and the majority are affiliated with NSC and PDC, which are the centers that participate in SeRC. In this talk, I will present some of the recent history regarding application experts, as well as the coordinated efforts among them. I will also present examples of work and projects performed by application experts in various scientific domains.
CSC – IT Center for Science Ltd. is building one of the most eco-efficient data centers in the world, located in Kajaani in northern Finland. The Kajaani Data Center offers a solution based on proven technology, modern and reliable infrastructure, and ecological efficiency for the data needs of research and development in the public and private sectors. The Funet network (Finnish University and Research Network) ensures excellent networking capabilities around the world.
MetCoOp has run as a project since August 2011 with the aim of establishing an operational organisation for the Swedish Meteorological and Hydrological Institute (SMHI) and the Norwegian Meteorological Institute (met.no) for the production of numerical weather predictions from March 2014. The vision of the project is to deliver the best short-term weather forecasts for the common areas. Numerical weather forecasting is resource-demanding, and the quality of the forecast is important. Global models are getting better, and it is a challenge to produce better forecasts on a finer grid and on a shorter timescale. I will present the status of the project (what is operational numerical weather prediction about?), what it is like to co-operate across borders and share a high-performance computing system, and something about future plans.
The EU-funded project ScalaLife has created a cross-disciplinary Competence Centre that provides one-to-one support to HPC users (efficient usage) and developers (code analysis/profiling) of packages such as the widely used codes GROMACS and DALTON. Support is also provided to resource providers (HPC centers) with proper installation, benchmarking and second-line support to users of those centers. Training events are also organized regularly. The Competence Centre establishes a long-term sustainable structure and welcomes collaborations with external communities and projects.
Computation is entering more and more fields of science; it is the third scientific method, right next to theory and experiments. But computational tasks and resources are becoming more and more complex, moving in the opposite direction of the entry-level skills of new users. How can we bring computation to new sciences? The answer: computing portals.
Workshop presentation - Science Gateways
1.1 Lead-in: Introduction to ELIXIR and IaaS for Life Science/Biomedical service providers
• Nordic ELIXIR community (Bengt Persson, BILS & Uppsala) 15 min
• Cloud offering for Life Science from ELIXIR FI @ CSC (Tommi Nyrönen and Jarno Laitinen, CSC) 20 min
1.2 Current scientific use cases: How the ELIXIR FI cloud has been integrated into operations
• ELIXIR DK bioinformatics tools use case (Kristoffer Rapacki, CBS, Technical Univ. of Denmark) 20 min
• Demonstration (Emil Rydza, CBS)
• ELIXIR NO bioinformatics use case (Kjell Petersen, CBU, Univ. of Bergen) 20 min
Lead-in to Session 2: setting up the working session, an analysis of delivering e-Infrastructure in the way outlined in Session 1.
2.1 Strengths and Opportunities
Task: Discuss strengths and opportunities of the IaaS interplay between biomedical service providers and e-Infrastructure providers in the national and Nordic setting. Raised points can be technical! Write these on the (big white sheets of) paper provided and tape them to the wall of the main room for presentation.
• Choose a presenter and a secretary (can be the same person)
• Work in (five) groups for 15 minutes in dedicated meeting rooms
• Reassemble
• Present findings (5 min)
• Decide if the mixture of people in the groups needs to be changed
16.15 Short 5 min break; re-organise in group (rooms)
2.2 Weaknesses and proposed Actions
Task: Summarise weak points and risks in the IaaS interplay between biomedical service providers and e-Infrastructure providers in the Nordic setting, and suggest actions to mitigate them.
• Choose a presenter and a secretary (can be the same person)
• Work in (five) groups for 15 minutes in dedicated meeting rooms
• Reassemble
• Present findings (5 min), followed by a short discussion
The Nordic collaboration on e-Infrastructures will be presented. The background of the collaboration will be described, as well as the current status and emerging opportunities.
Using examples from my own field of research in chemistry, from recent advances in PRACE Tier-0 projects, and from the Scientific Case for HPC in Europe, I will demonstrate the potential for high-quality research made possible by the use of high-performance computing. I will also briefly discuss the needs of scientists in terms of how HPC infrastructure is organized and utilized to provide the best foundation for scientific excellence.
This talk is a tour through more than 10 years of Grids, e-Science and e-Infrastructure. What was the rationale behind NDGF and the Nordic WLCG Tier-1? What are the possibilities for broadening its success to other sciences, and where does the greatest future potential in Nordic e-Infrastructure collaboration lie?
The ATLAS experiment at the LHC has been the key user of Nordic computing and storage resources ever since NDGF came into operation. ATLAS requirements drive the development of the distributed Nordic Tier-1 and Tier-2 centers. This talk gives an overview of the computing challenges faced by ATLAS, and of plans for operations after the restart of the LHC in 2015.
The European Grid Infrastructure was established in 2010, as the result of a community consultation, to provide a sustainable model for open computing and storage in Europe based on the prototyping experiences of the previous 10 years. The presentation will highlight EGI's current activities in support of WLCG and how the experiences of the last three years of operation are informing our future plans.
Language is the fabric of the Web, and language technologies arguably provide the grease for the weaving loom, evidenced for example by automated on-line translation, spoken-language interfaces to mobile devices, or the advertising and content recommendation systems that drive monetization of Web services, and thus availability at no charge to the end user. In this presentation, I will give a high-level impression of core techniques used in a variety of language technologies, with special emphasis on their computational properties. Then I will review my own experience, and that of my research group at the University of Oslo, in migrating from operating a dedicated server farm in the basement of our department to taking advantage of a national ‘throughput’ supercomputer, the ABEL cluster at Oslo. As a direct consequence of this happy development, the research profile of the group today is far more computation-heavy than would have been possible otherwise, and we work experimentally and empirically on a scale that would have been impossible to imagine five years ago.
Hans Jørgen Marker
I will describe Digital Humanities, achievements in the field, current challenges, and opportunities for researchers in the Nordic area to work together across both the humanities and ICT.
Towards the Clouds, Together
Collaboration on cloud services in research and education

Cloud services offer the research and education sector huge opportunities. The cloud empowers users to select and use the services they really want, in an easy and often economically attractive manner. Cloud services offer higher education and research organisations the opportunity to become more agile and provide their users with a wider range of relevant IT services, at a faster pace. IT departments can use the instant availability and elasticity of cloud services (rapid expansion or contraction of capacity) to reduce development time and modify their expenditure profile, reducing the need for periodic, large capital expenditure (CAPEX) and moving to a smoother, more predictable operational expenditure (OPEX, pay-per-use) model.

The standard delivery of cloud services by commercial organizations, however, is often incompatible with the requirements of higher education and research. There are significant challenges around trust, security, privacy, legislation and regulation. These issues have different implications for cloud services used in a private capacity compared to services used within a research environment, where the ownership of data and the need to ensure strong custodial control are paramount. There are also issues regarding data portability and interoperability. Vendors have a commercial imperative to retain users and reduce churn within their user base, and so have little incentive to collaborate with competitors on these issues. These are cross-border phenomena which have a major impact on the research and education community. It is therefore essential that higher education and research collaborate at a European level, so that the benefits of the cloud can be fully realised and the attendant risks fully understood and appropriately managed.
By presenting a united front, the R&E community can work to guide and influence cloud service providers in these areas. This presentation will highlight how these risks can be mitigated and managed through a coordinated approach and the implementation of a range of best practices across the community. In addition, through developing a range of procurement guidelines, collaboration can reduce the learning curve for brokering these services and minimise duplication of standards and policies.

About GÉANT
As the pan-European data network dedicated to the research and education community, GÉANT connects 40 million users to the internet. Through its innovative access and authentication services, eduroam and eduGAIN, GÉANT has long experience in the fields of user access services and federated service authentication and delivery. In GÉANT, 34 National Research and Education Network organizations collaborate on the cloud and address topics like cloud strategy, standards, interoperability, privacy and security, cloud brokerage and procurement, vendor management and integration. The presenter will share how GÉANT supports higher education and research on both a strategic and a practical level to:
• Get a better understanding of the full range of cloud computing solutions and their capabilities and limitations.
• Incorporate cloud activities in their roadmaps and portfolios, including developing service models for access to brokered services.
• Facilitate their user base in adopting the cloud under the right conditions of use, through development of a range of Campus Best Practice guidelines.
The audience will be invited to comment on these initiatives and relate them to the situation in their institution. Attendees will learn about the tools, instruments, and the approach to help their institute respond to and benefit from the cloud. The presenter will reserve ample time for discussion to gather input for the joint European cloud activities in GÉANT.
This talk will introduce Synnefo, a complete open source cloud platform. Synnefo powers GRNET's ~okeanos public cloud service, an IaaS cloud that has delivered advanced compute, network and storage services to the Greek research and academic community since 2011. The talk will cover our experiences building and running a large-scale production public cloud, focusing on:
* Open source vs. commercial software in building a large-scale, production cloud infrastructure
* Building on commodity hardware vs. vendor solutions
* Current open source solutions
* Identifying the building blocks of an IaaS cloud, and re-using existing open source components wherever possible
* Why Synnefo?
* General Synnefo architecture and the software components used
* Running robust, fault-tolerant VMs without a Storage Area Network
* Using a content-addressable file storage service as the Image repository
* Unified storage of files, VM Images and live VM disks, independently of the backend storage technology
* Thin VM provisioning with zero data copy, and live VM migration
* Current industry and open source community use cases of Synnefo
* Production readiness, scalability and maintainability on commodity hardware
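The content-addressable image repository mentioned in the abstract can be illustrated with a minimal, hypothetical sketch (this is not Synnefo code): blobs are keyed by the hash of their contents, so uploading the same VM image twice stores it only once.

```python
import hashlib

class ContentAddressableStore:
    """Minimal in-memory content-addressable blob store."""

    def __init__(self):
        self._blobs = {}

    def put(self, data):
        """Store a blob and return its content address (SHA-256 hex digest)."""
        key = hashlib.sha256(data).hexdigest()
        self._blobs.setdefault(key, data)  # no-op if the blob is already stored
        return key

    def get(self, key):
        """Retrieve a blob by its content address."""
        return self._blobs[key]

store = ContentAddressableStore()
k1 = store.put(b"vm-image-contents")
k2 = store.put(b"vm-image-contents")  # duplicate upload
assert k1 == k2                       # identical content, identical address
assert store.get(k1) == b"vm-image-contents"
```

Because the address is derived from the content, the store deduplicates automatically and clients can verify a download by re-hashing it; the same property underpins thin, zero-copy provisioning of VM disks from a shared image.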
This presentation will first consider the underlying driver (yes, just one!) for e-Research infrastructure. It will then look at the changing nature of research and research communication. The kinds of services that are needed to support this will be surveyed, and the presentation will conclude by examining how one might provide these at a national or regional level.
The EISCAT_3D project: Data and processing challenges and implications for Nordic e-infrastructure
Ian McCrea, STFC Rutherford Appleton Laboratory, UK

The EISCAT_3D project (www.eiscat3d.se) will be a large, distributed research infrastructure located in the Nordic region, with facilities in Norway, Sweden and Finland. EISCAT_3D will be a new type of radar facility for studies of the upper atmosphere and near-Earth space, replacing the current generation of dish-based EISCAT radars with a network of phased-array antenna fields offering considerably greater performance in terms of power, resolution and experimental flexibility. Realising such a system, however, presents several challenges, not least of which is the fact that the system will produce several orders of magnitude more data than the present radars. In order to extract the optimum performance, these data will need to be combined and processed in real time, requiring the provision of significant computing and data transport capabilities to relatively remote locations. In this talk, we will briefly review the current design of the EISCAT_3D system, with particular emphasis on the computing and networking requirements at each data processing level and how the various challenges are likely to be resolved. In the light of this, we will consider the potential implications for e-infrastructure provision in the Nordic region, in particular with regard to networking and long-term data storage.
Challenges in shared storage resources for large-scale e-science projects are highlighted by exploring the differences between two common storage solutions found in different communities, dCache and iRODS. Is there common ground? Are they mutually exclusive? Is there even a need?
The objective of the NorStore initiative is to develop and operate a persistent, nationally coordinated infrastructure that provides non-trivial data services to a broad range of scientific disciplines. The key to achieving this is to describe and share the data. Discovery of data is facilitated by providing open access to metadata. The launch of a national research data archive is one important step in this direction. The retrieval of restricted and public data is provided via autonomous technologies. The challenges and lessons learned will be discussed, with the view that similar requirements exist among the Nordic countries, along with the link to initiatives like EUDAT.
EUDAT is a new pan-European data initiative bringing together a unique consortium of 25 partners, including research communities, national data and high performance computing (HPC) centers, technology providers, and funding agencies from 13 countries. EUDAT aims to build a sustainable cross-disciplinary and cross-national data infrastructure providing a set of shared services to access and preserve research data. The talk will provide an overview of the status and plans of the project and highlight the contribution of the Nordic partners as well as possible opportunities for engaging further Nordic communities in this pan-European initiative.