HPC for life sciences: handling the challenges posed by a domain that relies on big data
2020-03-10T03:58:25Z (GMT)
The advancement of sequencing technologies, proteomics, high-throughput/high-content microscopy, etc., together with their decreasing costs, is creating an avalanche of data across the many sub-domains of the life sciences. This data deluge demands an interdisciplinary approach to the associated challenges, such as data storage, parallel and high-performance computing solutions for data analysis, scalability, security, and data integration. The ability to deliver solutions to these needs will convert highly granular, unstructured data into real scientific insights, accelerating advances in precision medical treatment based on an individual’s genetic makeup, the development of drugs with minimal side effects, species conservation programmes, etc.
New Zealand eScience Infrastructure (NeSI) is focused on delivering the tools our researchers require: a “huge” amount of memory to assemble a large genome, parallel simulation of the Newtonian equations of motion of biomolecules such as proteins and nucleic acids, the ever-increasing demand for data storage (from day-to-day to sensitive data), and efficient methods for end-to-end data transfers. In addition, NeSI’s partnership with Genomics Aotearoa has been instrumental in introducing training tools such as virtual machines, and the extensive number of workshops hosted on these machines is helping beginner-level bioinformaticians/computational biologists acquire advanced skills within a short period, for use in their quest to understand the rules of life.
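As an illustration of the kind of “huge memory” genome-assembly job described above, a request to a Slurm-managed HPC cluster (the scheduler NeSI uses) might look like the following sketch. The partition name, module version, memory figure, and input file paths are hypothetical, not NeSI-specific values:

```shell
#!/bin/bash -e
#SBATCH --job-name=genome_asm      # hypothetical job name
#SBATCH --mem=500G                 # large-memory request for de novo assembly
#SBATCH --cpus-per-task=32         # threads for the assembler
#SBATCH --time=48:00:00            # wall-time limit
#SBATCH --partition=bigmem         # hypothetical high-memory partition

# Module name and input paths are illustrative only.
module load SPAdes

# Run the SPAdes assembler on paired-end reads, matching the
# resources requested from Slurm above.
spades.py \
    --threads "$SLURM_CPUS_PER_TASK" \
    --memory 500 \
    -1 reads_R1.fastq.gz \
    -2 reads_R2.fastq.gz \
    -o assembly_out
```

The key design point is that the assembler’s thread and memory limits are tied to the Slurm resource request, so the job neither oversubscribes the node nor is killed by the scheduler for exceeding its allocation.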
ABOUT THE AUTHOR(S)
Dinindu Senanayake is an Applications Support Specialist at NeSI with a particular interest in Bioinformatics and Computational Biology. He joined NeSI after half a decade of research experience in Cancer Genetics, Chemical Genetics, and Bioinformatics.