TECHNICAL SESSIONS

Yahya Bokhari
Associate Research Scientist, King Abdullah International Medical Research Center

Talk Title:
Application of Artificial Intelligence in Cytogenetics
Short Description
Laboratory image analysis is among the most important skills needed for the diagnosis of genetic diseases. Applying artificial intelligence methods and algorithms to medical laboratory images not only saves time but also leads to more accurate results. Chromosomal analysis is one of the most difficult imaging tasks to which AI can be applied. To analyze these images automatically, we used multiple AI methods to enhance and unify image quality.
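As a hedged illustration of what "enhancing and unifying image quality" can involve, the sketch below shows one plausible preprocessing step, assuming OpenCV. The function name and parameters are illustrative only; the abstract does not specify the methods actually used.

```python
import cv2

# Illustrative normalization step for karyotype images (not the talk's
# actual method): contrast-limited adaptive histogram equalization (CLAHE)
# evens out staining and illumination differences, and resizing gives
# every image a uniform resolution before analysis.
def normalize_karyotype(path, size=(512, 512)):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)  # metaphase spreads are grayscale
    img = cv2.resize(img, size, interpolation=cv2.INTER_AREA)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(img)  # contrast-enhanced, uniform-size image
```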

Paul Brook
EMEA Director, Data Centric Workloads Specialists, Dell

Talk Title:
Data Analytics & AI in HPC
Short Description
Data is growing, AI is everywhere, and HPC is converging with every emerging and disruptive technology you hear about. You know this, so this session will focus on why this is happening and how you can accelerate your journey into the next generation of HPC. It will spotlight how the complex data management process integrates into a modern HPC environment. We look ahead to the next generation of HPC environments, where data is gathered at the edge, processed using AI, and flows through a distributed HPC fabric. AI and data analytics in HPC span hybrid clouds and innovative on-premises, cloud-enabled HPC services. The future of HPC is amazing and the potential is huge. Join this session for a closer look at the how as well as the why of data analytics and AI in HPC.

Andrew Grant
Global VP, Strategic HPC Projects, ATOS

Talk Title:
Road to Exascale
Short Description
Exascale computing is the term given to the next 50-100 fold increase in speed over the fastest supercomputers in use today. These super-powerful machines are poised to transform modeling and simulation in science and engineering. It is hoped that exascale machines will solve some or all of the major problems that are currently challenging to overcome.
Join us with Mr. Andy Grant as he speaks about the implementation and application of such systems and the transition of HPC towards exascale computing.

Bruno LECOINTE
VP Group Business Support HPC AI Quantum, ATOS

Talk Title:
Challenges of Exascale and beyond
Short Description
The scale of today’s leading HPC systems, which operate at the petascale, has put a strain on many simulation codes. The current challenge is to move from 10^15 flop/s (a petaflop) to the next milestone of 10^18 flop/s, an exaflop. It is crucial to note that hardware is not the only exascale computing challenge; software and applications must scale as well. Such systems could have over a million cores, but they also need to excel in reliability, programmability, power consumption, and usability (to name a few).
Join us with Mr. Bruno Lecointe as he elaborates on the current and future challenges of such complex systems.

Abduljabar Alsayoud
Assistant Professor, KFUPM

Talk Title:
HPC system and applications at KFUPM
Short Description
In this talk, the existing HPC system at KFUPM and the plan for a new one will be presented. The main applications requiring HPC capabilities will then be highlighted. Finally, I will present other HPC options available to researchers at Saudi universities when on-premises HPC is not enough.

Mohammed S. Alarawi
Research Specialist, KAUST

Talk Title:
HPC system and applications at KFUPM
Short Description
The volume of data generated from biological sources has increased massively. Since the introduction of high-throughput sequencing, imaging, and screening platforms, the digitization of biology has pushed computational resources to new limits in compute, storage, and data transfer. Secondary use of biological data increases the value of research funding, and the number of algorithms and tools developed to analyze biological data is growing rapidly. Zettabytes of raw data, not to mention intermediate analysis results, are projected in the near future, as major databases are doubling in size every 12-18 months. This makes pooling resources and developing strategies for best-practice use of data and resources critically important. Biological and biomedical research within Saudi Arabia needs to focus on fair use of data and fair access to HPC resources to further the goal of improving human life by answering fundamental research questions.

Edmondo Orlotti
AI Business Development Manager, Hewlett Packard Enterprise

Talk Title:
Supercomputing for the Exascale Computing Era
Short Description
Cray AI Development Environment is a machine learning training platform that makes building machine learning models fast and easy. The software platform enables machine learning engineers and researchers to:
- Train models faster using state-of-the-art distributed training: the platform handles provisioning machines, setting up networking, optimizing communication between machines, efficient distributed data loading, and fault tolerance.
- Automatically find high-quality models with advanced hyperparameter tuning: including state-of-the-art algorithms developed by the creators of Hyperband and ASHA.
- Efficiently utilize different accelerators (e.g. GPUs): with intelligent and configurable resource management.
- Track, reproduce, and collaborate on experiments: with automatic experiment tracking that works out of the box, covering code versions, metrics, checkpoints, and hyperparameters.
As an end-to-end training platform, the system integrates these features into an easy-to-use, high-performance machine learning and deep learning environment that can be deployed on bare metal, Kubernetes, or the cloud, supporting the largest providers such as AWS, Azure, and GCP.
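The Hyperband and ASHA algorithms mentioned above are built on successive halving. The sketch below illustrates that underlying idea in plain Python; it is not the platform's actual API, and the toy `train_eval` function is an assumption for demonstration only.

```python
import random

def successive_halving(configs, train_eval, max_budget, eta=3):
    """Core idea behind Hyperband/ASHA: evaluate many configurations
    cheaply, keep the top 1/eta, and re-evaluate the survivors with
    eta times more budget until one configuration remains."""
    budget = max(1, max_budget // eta ** 2)
    while len(configs) > 1 and budget <= max_budget:
        scores = [(train_eval(cfg, budget), cfg) for cfg in configs]
        scores.sort(key=lambda s: s[0], reverse=True)  # higher score is better
        configs = [cfg for _, cfg in scores[: max(1, len(configs) // eta)]]
        budget *= eta
    return configs[0]

# Toy stand-in for model training: the score peaks at lr = 0.1, and the
# evaluation noise shrinks as the training budget grows.
def train_eval(cfg, budget):
    return -(cfg["lr"] - 0.1) ** 2 + random.gauss(0, 0.01 / budget)

candidates = [{"lr": random.uniform(0.001, 1.0)} for _ in range(27)]
print(successive_halving(candidates, train_eval, max_budget=81))
```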

Rashid Mehmood
Director of Research, Training, and Consultancy, HPC Center, King Abdulaziz University

Talk Title:
Smartization of Societies: High-Performance Ingredients and Examples
Short Description
Smartization of our societies and living spaces could enable a sustainable future for us humans, thanks to its data-driven analytics approach and its focus on the triple bottom line (TBL) of social, environmental, and economic sustainability. More precisely, smartization relies on collecting data and making informed decisions on policy and action using cutting-edge technologies such as the Internet of Things (IoT), big data, artificial intelligence, and cloud, fog, edge, and distributed computing.
In this talk, I will review some of our research at KAU on bringing innovation through the smartization of our environments.

Sunday Olusanya Olatunji (Aadam)
Associate Professor, Imam Abdulrahman Bin Faisal University

Talk Title:
Hybridized ‘HPC – Ensemble ML’ Towards Making Data Speak More Clearly: A Unique Paradigms Union as a Panacea for Improved Medical Research & Solutions
Short Description
The continuous need for improved medical research cannot be overemphasised, considering the threat of different variants of diseases, known and unknown, that continue to plague the world. Artificial intelligence techniques have been gaining ground, especially different variants of machine learning algorithms for effective and early diagnosis of various diseases. However, several challenges militate against taking full advantage of these latest AI technologies, foremost of which is the heterogeneous nature of patient healthcare records in format and location. Presently, researchers mostly focus on one specific type of dataset (e.g. clinical data, CT scan output, MRI, notes, etc.) to develop the target predictive solution or other AI-based solutions. This does not allow us to take full advantage of the various types of datasets simultaneously; hence the need to bring in the concept of ML ensembles, which allows different models to be built using different algorithms, different data types, or combinations of both, and then lets those models cooperatively and collaboratively unite to reach the final decision with better performance. Ensemble ML has been established as one of the best ways to take advantage of various AI and ML solutions, as several real-life competitions have demonstrated the superiority of this variant of the ML paradigm. However, despite the potency of ensemble techniques in consistently achieving better outcomes, there is the challenge of computational complexity as the number of models to be combined increases. Combining several individual solutions (sometimes hundreds of them) into a single cooperative entity, the ensemble solution, demands substantial computing resources, and this is a major bottleneck.
Therefore, considering the power and advantages of HPC, a careful and systematic coupling of ensemble techniques with HPC resources will go a long way toward resolving the computational challenges of ensemble ML, ushering in ‘HPC-Ensemble ML’ as a panacea for sustainable, improved medical research and solution delivery.
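As a minimal sketch of the ensemble idea described above, assuming scikit-learn and synthetic stand-in data (real work would combine heterogeneous clinical datasets and far more models):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for heterogeneous patient records.
X, y = make_classification(n_samples=2000, n_features=40, random_state=0)

# Different algorithms vote cooperatively on the final decision.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("gb", GradientBoostingClassifier()),
    ],
    voting="soft",  # average the predicted probabilities
    n_jobs=-1,      # fit base models in parallel across cores
)
print(cross_val_score(ensemble, X, y, cv=5, n_jobs=-1).mean())
```

The `n_jobs` parallelism is the piece an HPC system scales out: each base model, or each member of a much larger ensemble, can be trained on separate nodes before the cooperative vote.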

Sven Breuner
Field CTO, VAST Data

Talk Title:
Addressing the Exascale storage challenge
Short Description
VAST Data’s managed storage software unlocks the value of data and modernizes datacentres in preparation for the era of AI computing. VAST delivers real-time performance to all data and overcomes the historic cost barriers to building all-flash datacentres. Since its launch in February 2019, VAST has become the fastest-selling infrastructure startup in history. Join Sven Breuner during this session to learn more.

Muneera M. Almuhaidib
Computer Operating System Specialist, Saudi Aramco

Talk Title:
HPC Cybersecurity benchmark
Short Description
This presentation shares the outcomes of a research project on HPC cybersecurity posture recently conducted by Saudi Aramco. The main purpose was to see what other major HPC centers are doing in terms of security. The presentation covers the research problem, objectives, survey, benchmarking, and feasible ways to enhance the security of the ECC HPC environment.

Balamurugan Ramassamy
Director HPC APAC & GCC Countries, Altair

Talk Title:
Multi-dimensional HPC: A deep-dive into the Convergence of HPC and AI
Short Description
High-performance computing (HPC) and artificial intelligence (AI) are converging, which requires administrators to manage both workloads together in an unsiloed environment. This presentation will illustrate how PBS Professional, the industry’s leading job scheduling and workload management solution, together with other HPC tools, can be used as a single scheduler for both HPC and Kubernetes. We will also explore integration with the most important AI tools.
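For context, a converged scheduler still accepts classic batch submissions. Below is a minimal sketch of a PBS Professional submission driven from Python; the resource selections, script contents, and queue setup are site-specific assumptions, not part of the talk.

```python
import subprocess
import tempfile

# Illustrative PBS Professional job script; the resource selection and the
# training command are placeholders for a real AI workload.
script = """#!/bin/bash
#PBS -N train-model
#PBS -l select=2:ncpus=32:ngpus=4:mem=128gb
#PBS -l walltime=04:00:00
cd $PBS_O_WORKDIR
python train.py
"""

with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
    f.write(script)
    path = f.name

# qsub prints the new job's ID on stdout.
job_id = subprocess.run(["qsub", path], capture_output=True, text=True).stdout.strip()
print(job_id)
```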

Walid Shaari
Cloud Architect, Saudi Aramco

Talk Title:
Cloud-native HPC use case
Short Description
This presentation will cover the available cloud-native HPC services and how they can be utilized to run HPC applications securely and in a cost-effective way.
We will introduce the scope and extent of the current state of HPC services in the cloud and how they provide the building blocks required to assemble the infrastructure and services for HPC workloads.
Topics include innovating without infrastructure constraints, improving security and operational posture, and enabling advanced workflows.
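As one purely illustrative example of such building blocks, the sketch below submits a containerized job to a managed batch service (AWS Batch via boto3 here; the talk does not name specific providers, and the queue and job-definition names are assumptions):

```python
import boto3

# Illustrative only: assumes an AWS Batch compute environment, job queue,
# and job definition have already been provisioned.
batch = boto3.client("batch", region_name="me-south-1")

response = batch.submit_job(
    jobName="cfd-case-042",
    jobQueue="hpc-spot-queue",          # placeholder queue name
    jobDefinition="openfoam-solver:3",  # placeholder job definition
    containerOverrides={
        "command": ["run_case.sh", "case042"],
        "resourceRequirements": [
            {"type": "VCPU", "value": "36"},
            {"type": "MEMORY", "value": "72000"},  # MiB
        ],
    },
)
print(response["jobId"])
```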

Nora Alwadaah
HPC Specialist, Saudi Aramco

Talk Title:
Cloud-native HPC use case
Short Description
This presentation will cover the available cloud-native HPC services and how they can be utilized to run HPC applications securely and in a cost-effective way.
We will introduce the scope and extent of the current state of HPC services in the cloud and how they provide the building blocks required to assemble the infrastructure and services for HPC workloads.
Topics include innovating without infrastructure constraints, improving security and operational posture, and enabling advanced workflows.

Merna Moawad
Software Engineer, BrightSkies Technologies

Talk Title:
Leveraging DAOS Storage System for Seismic Data Storage and Manipulation
Short Description
The DAOS seismic graph is introduced to the seismic community, utilizing the evolving DAOS technology to solve some of the seismic I/O bottlenecks caused by the SEG-Y data format. It leverages graph theory, in addition to DAOS object-based storage, to design and implement a new seismic data format natively on top of the DAOS storage model in order to accelerate data access, provide in-storage compute capabilities to process data in place, and remove the serial SEG-Y file constraints. The DAOS seismic graph API is built on top of the DAOS file system (DFS); seismic data is accessed and manipulated through this API after accessing the root seismic DFS object. The mapping layer utilizes graph theory and object storage to split the acquisition geometry, represented by the trace headers, away from the time-series data samples.
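The central design point, keeping trace headers (acquisition geometry) in objects separate from the time-series samples they point to, can be pictured abstractly. The classes below are hypothetical and do not reflect the real DAOS seismic graph API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the header/sample split; the real DAOS object
# layout and API differ in detail.
@dataclass
class TraceHeader:       # acquisition geometry lives here
    shot: int
    receiver: int
    cdp: int
    offset: float
    samples_oid: int     # "pointer" to the samples object

@dataclass
class TraceSamples:      # time-series amplitudes live here
    data: list[float]

@dataclass
class SeismicGraph:
    headers: dict[int, TraceHeader] = field(default_factory=dict)
    samples: dict[int, TraceSamples] = field(default_factory=dict)

    def traces_for_shot(self, shot):
        # Header-only traversal: no sample data is touched until needed,
        # unlike a serial scan through a SEG-Y file.
        return [h for h in self.headers.values() if h.shot == shot]
```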

Obai Alnajjar
Petroleum Engineering Systems Analyst IV, Saudi Aramco

Talk Title:
Simulation Runtime Optimization via Auto-Tuning of Numerical Tolerances
Short Description
The presentation will give an overview of Saudi Aramco’s efforts to optimize the runtime of numerical reservoir simulators. These efforts have focused on optimization of the reservoir simulation model solver tolerances, global source code optimizations (e.g. complex well modeling, domain decomposition, MPI communication reduction), and HPC environment tuning. The presentation will shed light on a new, innovative approach to determining the optimum numerical solver tolerances by analyzing various parameters (e.g. pressure and saturation changes, material balance errors, etc.). This approach has the potential to speed up simulation runtime by up to 60%, improving simulation turnaround and allowing more simulation runs to be accommodated to address business requirements.
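In spirit, such an auto-tuner loosens solver tolerances and accepts the loosest setting whose accuracy indicators remain within bounds. The sketch below is a deliberately simplified stand-in; `run_simulation`, its cost model, and the thresholds are hypothetical, not Saudi Aramco’s workflow:

```python
import math

# Hypothetical stand-in for a reservoir-simulator run: looser tolerances
# run faster but accumulate larger material-balance error.
def run_simulation(tolerance):
    runtime_s = 3600 * (1 + 0.15 * math.log10(1e-9 / tolerance))
    mb_error = tolerance * 0.2
    return runtime_s, mb_error

def autotune(tolerances=(1e-7, 1e-6, 1e-5, 1e-4), max_mb_error=1e-5):
    """Return the loosest tolerance whose accuracy indicators stay acceptable."""
    best = None
    for tol in sorted(tolerances):        # tightest to loosest
        runtime_s, mb_error = run_simulation(tol)
        if mb_error <= max_mb_error:
            best = (tol, runtime_s)       # loosest acceptable wins
    return best

print(autotune())  # e.g. (1e-05, 1440.0) under this toy cost model
```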

Rick Koopman
EMEA Technical Lead HPC and AI, Lenovo

Talk Title:
How to improve Biomedical Analytics while reducing Carbon Footprint
Short Description
Rick will talk about recent technology developments and work with customers on reducing runtimes and improving the quality of results of biomedical analytics and genomics workloads, while also dramatically reducing the carbon footprint of the technology required to achieve these improvements.

Alanood Alrassan
Petroleum Engineer System Analyst, Saudi Aramco

Talk Title:
Leveraging Artificial Intelligence to Optimize Reservoir Simulation HPC Environment
Short Description
This presentation will give an overview of several AI algorithms developed in-house to optimize the utilization of reservoir simulation HPC compute resources. The development capitalizes on deep learning and big data mining to accurately predict GigaPOWERS jobs’ resource requirements (e.g. cores, memory, and runtime). This is accomplished by predicting the optimal number of cores and the memory requirements while maintaining an optimized runtime and ensuring maximum scalability. This effort has helped to optimize the utilization of compute resources and significantly improve reservoir simulation KPIs (e.g. job wait time, HPC effectiveness, etc.).
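The prediction step can be pictured as regression over historical job records. Below is a minimal sketch assuming scikit-learn, with synthetic data and made-up feature names (the production system around GigaPOWERS is, of course, far more involved):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical historical job log: [grid cells, wells, simulated years].
rng = np.random.default_rng(0)
X = rng.uniform([1e6, 10, 1], [1e8, 500, 50], size=(5000, 3))
# Synthetic "observed runtime" with noise, standing in for real job records.
runtime_hours = X[:, 0] * 1e-7 + X[:, 1] * 0.01 + rng.normal(0, 1, 5000)

X_train, X_test, y_train, y_test = train_test_split(X, runtime_hours)
model = RandomForestRegressor(n_estimators=200, n_jobs=-1).fit(X_train, y_train)
print("R^2 on held-out jobs:", model.score(X_test, y_test))
```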

Dr. Nofe Ateq Alganmi
Assistant Professor, King Abdulaziz University

Talk Title:
Increasing Diagnostic rate in Clinical Genomics Variant Interpretation using Aziz Supercomputer
Short Description
With the current knowledge of NGS (Next Generation Sequencing), its medical uses, and the relevant progress in information technology (such as high-performance computing), it is possible to imagine the near-future vision of ubiquitous medical software systems that will not only continuously support the “bench-to-bedside” transition but will also be available in custom toolboxes for all phases of diagnosis and treatment.
In this talk, promising results and best practices in using the King Abdulaziz University supercomputer (Aziz) to apply genetic medicine in clinics will be presented.

Naya Nagy
Imam Abdulrahman Bin Faisal University

Talk Title:
Coding in the Entanglement Domain
Short Description
Quantum computers have the potential both to affect or intrude into existing systems and to build new, more versatile systems. This talk will address a few problems that cover both domains.
The first example comes from the process of bitcoin mining. A quantum computer of reasonable size has been proven to mine bitcoins with a quadratic speedup, thereby consistently outperforming a strong parallel machine. Today’s practical quantum computers, however, do not yet reach the necessary memory.
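The quadratic speedup referred to here is the standard bound from Grover’s search algorithm, stated minimally:

```latex
% Searching an unstructured space of N = 2^k candidate nonces:
% classical expected cost versus Grover's algorithm.
\[
  T_{\text{classical}} = O(N) = O(2^{k}),
  \qquad
  T_{\text{Grover}} = O\!\left(\sqrt{N}\right) = O(2^{k/2}).
\]
```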
The second example claims that the photon is the ideal physical medium for transmitting information, as it travels at the maximum possible speed. Polarization of a photon is the predominant quantum property used to encode information, but other encoding domains have been considered. In this talk, we put forward the entanglement degree of freedom of a photon as an exploitable resource for encoding information in quantum cryptographic protocols. We show an application of this concept in steganography: a quantum image can hold a hidden message in the entanglement domain while the original image is not changed at all, not even minutely, unlike with classical methods.
The end of the talk will describe the state of the art on existing quantum computers: size, capacity, and the price of the quantum race.

Dr. Rayan
Faculty Member, King Abdulaziz University

Talk Title:
Aziz HPC Centre
Short Description
This talk will present the HPC facilities at the Aziz HPC Centre.

Badr Badghaish
Geophysicist IV, Saudi Aramco

Talk Title:
Leveraging High Performance Computing for Big Data Processing
Short Description
Datasets such as 3D seismic datasets are typically enormous and are therefore computationally expensive to compute seismic attributes on. They may also contain noise, which can degrade the results of interpretation algorithms and computed seismic attributes. As a result, powerful filtering algorithms such as Non-Local Means (NLM) are required to produce noise-reduced, structure-preserving results. Such algorithms are computationally intensive for large seismic datasets and would therefore benefit significantly from hardware acceleration.
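NLM itself is available off the shelf; the sketch below applies it to a synthetic 2-D section, assuming scikit-image. Scaling this to a production 3-D seismic volume is precisely where the hardware acceleration discussed in this talk becomes essential.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

# Synthetic stand-in for one noisy 2-D seismic section.
rng = np.random.default_rng(0)
section = np.sin(np.linspace(0, 20, 256))[:, None] * np.ones((256, 256))
noisy = section + rng.normal(0, 0.2, section.shape)

sigma = estimate_sigma(noisy)          # estimate the noise level
denoised = denoise_nl_means(
    noisy,
    h=0.8 * sigma,                     # filter strength tied to noise estimate
    sigma=sigma,
    patch_size=5,                      # patches compared across the section...
    patch_distance=6,                  # ...within this search radius
    fast_mode=True,
)
print(np.abs(denoised - section).mean())  # residual error vs. the clean section
```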

Zeeshan Kamal Siddiqi
Lenovo

Talk Title:
Leveraging High Performance Computing for Big Data Processing
Short Description
Did you know that Lenovo helps genomics researchers analyze a whole human genome in 53 minutes, and whole exomes in about a minute? In fact, in standard cloud or on-prem environments the same analysis usually takes 60-150 hours. That means Lenovo GOAST, the Genomics Optimization and Scalability Tool, is 167x faster than standard environments. Accelerated execution speeds mean your users get to process more genomes concurrently, find answers faster, and make breakthroughs that save more lives.
GOAST leverages an architecture of carefully selected hardware to accelerate genomics performance. Lenovo uses the open-source tools your scientists know and trust while tuning them precisely to maximize the use of a CPU-based architecture. This design uses standard off-the-shelf (OTS) components, with no GPUs or FPGAs of any kind. A CPU-based infrastructure and open-source tools mean costs 50% lower than other solutions requiring GPUs and proprietary software licenses.
What’s more, the Lenovo Genomics R&D group has already done the work for you, so your users can focus on their science and you can focus on supporting their goals. And unlike DIY solutions, GOAST gives you access to a turnkey, pre-optimized setup delivering high-performing results from day one.

Mohamad Jaghoub
Senior Field Application Engineer for AMD META, AMD

Talk Title:
Accelerating your HPC journey
Short Description
AMD technologies have been gaining momentum in the HPC space. This session will cover the technologies AMD is developing to accelerate your HPC and AI use cases, including 3D V-Cache technology, GPUs, and adaptive computing.

Dipl. Eng. Laurent Thiers
DDN

Talk Title:
Accelerating Intelligent Infrastructure for a Changing World
Short Description
In the post-pandemic world, corporate investments in AI have hit a record high, and artificial intelligence is dramatically shifting the boundaries of traditional computing and analytics. Data has become a necessary strategic tool, one that must deliver accurate, real-time insight into any enterprise’s business, customers, and changing market conditions. Existing IT and data storage systems are inadequate to handle the very high speed and massive scale requirements of AI and analytics.
To succeed in today’s AI-driven world, a new high-performance data-centric IT approach is an absolute necessity:
- Data is the Source Code of AI
- Data is Imperative to AI
- Storage is Imperative to AI
Join our technical storage presentation to learn why “DDN is the de facto name for AI storage in high-performance environments”.

Mustafa Youldash
Assistant Professor, IAU

Talk Title:
Taking Medical AI from Research to Clinical Production with HPC
Short Description
Healthcare demands new computing paradigms to meet the needs of personalized medicine, next-generation clinics, enhanced quality of care, and breakthroughs in biomedical research to treat disease. Artificial intelligence (AI) has the ability to revolutionize and personalize targeted healthcare for individual patients. The regulatory frameworks for AI in healthcare are a critical component in managing and maximizing accurate healthcare predictions.
The lifecycle of medical AI involves labeling medical imaging data (such as 2- and 3-dimensional scans like X-ray, CT, and MRI data), training models, building and optimizing AI applications, and finally deploying and monitoring these applications in clinical production. In this session we will introduce a rich suite of HPC frameworks and platforms that can help researchers and data scientists alike label data and train top-performing models rapidly.