Biotechnology is exploding! In the span of a couple of decades, we have gone from individual researchers running small-scale experiments and analyzing them in a spreadsheet to generating massive amounts of data from relevant human samples (e.g., biopsies) with high-throughput technologies and analyzing that data with powerful algorithms.
In our quest to turn that information into knowledge, a new bottleneck has emerged: we need collaborative best practices that enable interdisciplinary teams to take an engineered approach to modern bioinformatics, with the roles, processes, and automation needed to deliver high-performance research results.
In a recent webinar Code Ocean’s CEO Simon Adar and Yair Benita, Head of Science Operations at CytoReason, discussed how high-throughput collaborations are a game changer in helping computational teams make sense of all the data.
Here are the five key takeaways:
- Teamwork in lieu of individual efforts – The traditional approach of individuals with different skill sets analyzing data in isolation no longer works. These large and diverse data sets require teams with diverse skills (biologists, computational scientists, clinicians, and software engineers) working together.
- Development methodologies are crucial for success – Interdisciplinary teams need to be enabled with the right development methodologies and standardized approaches, e.g. specific roles for optimizing algorithms and bioinformatics pipelines that are consistently applied to similar problems.
- Solve one big problem – Teams need to combine their efforts and focus exclusively on solving one important question at a time based on the available data. While challenging at first, this approach allows teams to devise unique ways to answer these questions, e.g. identifying a biomarker or a target for a disease.
- Interactive tools are enabling – They allow computational biologists to share work with biologists and clinicians, who normally don’t write code, giving them the opportunity to interact directly with the data and explore follow-on questions.
- Reproducibility, shareability and traceability are key – Throughout the entire computational analysis process, reproducibility, shareability, and traceability help build trust in the results with both internal and external customers.
“The times where a single computational scientist could solve problems and inform critical drug development decisions are over. Human molecular and clinical data is too large, too complex and too difficult to interpret. A collaborative approach is required. Software engineering went through a similar process: back in the 80s individuals wrote entire operating systems (e.g. MS-DOS). Then software engineers learned quickly how to build teams, work together and adopt methodologies and automation to support teamwork and deliver high-quality results,” said Yair.
The time has come for computational science teams to also adopt high-throughput collaborations.
Simon and Yair share many more insights in our webinar. To listen to it, please click here.
Code Ocean’s platform was built to allow efficient and transparent high-throughput collaboration. If you would like to discuss with one of our team members how the Code Ocean Workbench, Compute Capsules™ and App Panel can improve reproducibility, shareability and traceability of your computational work, please contact us here.