Generative AI has established a leadership position in the technology zeitgeist of 2023 and represents a significant evolution in the field of artificial intelligence (AI) and machine learning (ML). While AI/ML technologies have been advancing steadily for decades, chat-based interfaces democratized access to AI. By bringing AI out of the exclusive realm of specialists and into the hands of the general public, these chat-based interfaces have played a key role in the exponential rise of interest in generative AI.

At Scitara, our primary mission is to facilitate laboratory connectivity and workflow automation through our vendor-agnostic iPaaS application for science, Scitara DLX™. As a part of this commitment, we are exploring the potential of generative AI. While there are applications of AI/ML in data analysis, we see our role as enabling and empowering our partners in this domain. By focusing on AI/ML solutions that enhance data mobility and accessibility, we strive to provide a robust platform upon which our partners and customers can build and apply their data analysis expertise. As we examine generative AI in this blog, we’ll explore challenges we’ve encountered, as well as the potential of leveraging such technology for better data mobility.

Navigating Challenges in Generative AI

In our collective enthusiasm for the transformative abilities of generative AI, it is important to acknowledge that prevailing implementations aren’t without their challenges when applied to mission-critical tasks. These challenges encompass a spectrum of issues, including:

Privacy and security concerns

Large language models (LLMs) are often accessed through public APIs, and concerns are growing that users’ queries containing sensitive or proprietary information could be exposed or misused by these API providers. Safeguarding user data from malicious intent or data breaches is of utmost importance.

Hallucinations and inaccuracies

The phenomenon of “hallucinations” arises when LLMs produce seemingly convincing yet ultimately incorrect responses to questions outside the scope of their training data. This poses a significant challenge: relying on such responses can lead to inaccurate outcomes, particularly in critical applications.

Real-world connectivity

LLMs operate within the confines of their training data, making them time-bound and static in their knowledge. This limitation restricts their utility in addressing real-time queries or applications that require up-to-date information. Moreover, their inability to handle temporal or real-world queries curtails their effectiveness in scenarios based on the current context.

Enhancing Scitara DLX with Generative AI: Transforming Orchestrations and Building Intuitive Interfaces

Here at Scitara, we have been exploring ways to leverage generative AI and provide Scitara DLX users with an even more powerful user experience. Here are a few ways we see generative AI being integrated into Scitara DLX, providing users with even more resources to further simplify their lab automation processes:

No-code UI for data transformation and user input

For those familiar with Scitara, two powerful features in Scitara DLX Orchestrations are function transform and user input steps. A function transform step allows you to enter a code snippet (JavaScript or Python) that modifies or transforms execution data. A user input step allows you to build user interfaces that collect input from users.

Leveraging generative AI, users with minimal coding knowledge will be able to easily modify or transform execution data within Scitara’s powerful Orchestration tools. With the click of a button, you will be able to transfer code from a chat session into the UI and be well on your way to performing what were previously complex data transformation processes.

In the following example, the user requested a CSV file parsing function. The AI tool recognized the user was working in JavaScript and generated the requested function.
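As a rough illustration of what such a request might produce, here is a minimal sketch of a CSV parsing function in JavaScript (the function name and the shape of the parsed output are assumptions for illustration, not the actual generated code):

```javascript
// Illustrative sketch only: the function name and output shape are assumptions,
// not the exact code produced by the AI tool in the example described above.
function parseCsv(csvText, delimiter = ',') {
  // Split into non-empty lines and treat the first line as the header row
  const lines = csvText.split(/\r?\n/).filter((line) => line.trim() !== '');
  const headers = lines[0].split(delimiter).map((h) => h.trim());

  // Map each remaining line to an object keyed by the header names
  return lines.slice(1).map((line) => {
    const values = line.split(delimiter).map((v) => v.trim());
    return headers.reduce((row, header, i) => {
      row[header] = values[i] ?? '';
      return row;
    }, {});
  });
}

// Example usage:
// parseCsv('sample,volume\nS-001,10\nS-002,12')
// => [{ sample: 'S-001', volume: '10' }, { sample: 'S-002', volume: '12' }]
```

Code like this could then be dropped directly into a function transform step and refined further through the chat session.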

When creating an interface for user input, users will be able to tell the generative AI assistant what the desired interface should look like, and the assistant will generate the appropriate schemas. The user can continue ‘chatting’ with the assistant to make any further adjustments to the user input interface.
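For example, a request such as “give me a form with a sample ID text field, a reviewer dropdown, and an approval checkbox” might yield a schema along these lines. This is a hypothetical sketch in a JSON-Schema-like style; the actual schema format used by DLX user input steps may differ:

```javascript
// Hypothetical example of a user input schema an AI assistant might generate.
// The JSON-Schema-style structure and field names are assumptions for
// illustration; DLX's actual user input schema format may differ.
const userInputSchema = {
  title: 'Sample Review',
  type: 'object',
  required: ['sampleId', 'reviewer'],
  properties: {
    sampleId: {
      type: 'string',
      title: 'Sample ID',
      description: 'Identifier of the sample under review',
    },
    reviewer: {
      type: 'string',
      title: 'Reviewer',
      enum: ['Alice', 'Bob', 'Charlie'], // rendered as a dropdown
    },
    approved: {
      type: 'boolean',
      title: 'Approved?',
      default: false,
    },
  },
};
```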

Orchestration building made easy

We are also exploring the possibilities of integrating a chat-based interface to assist with Orchestration development. The tool would assist in making some initial decisions on appropriate data mapping or data transformation. For example, the interface could enable the following types of interactions:

  • “Create an Orchestration triggered by an ELN event, and then create a sample set in a CDS” (where ‘ELN’ and ‘CDS’ would be replaced by specific Scitara Connection names)
  • “Insert a JavaScript function transform step to transform the samples array in the trigger payload into the format required by a CDS sample set action” (where ‘CDS’ would be replaced by a specific Scitara Connection name; a rough sketch of such a transform follows this list)
  • “Map the array output from step 1 to the samples List required in the options of step 2”
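As a rough sketch, the function transform step requested in the second interaction above might look something like the following. The field names on both sides (the trigger payload and the CDS sample set format) are hypothetical and stand in for whatever a specific Connection actually requires:

```javascript
// Hypothetical sketch of a function transform step body. The field names on
// both sides (trigger payload and CDS sample set) are assumptions for
// illustration, not an actual Connection's data model.
function transformSamples(triggerPayload) {
  return triggerPayload.samples.map((sample, index) => ({
    vial: index + 1,                  // sequential vial position
    sampleName: sample.id,            // ELN sample identifier
    injectionVolume: sample.volumeUl, // volume expected by the CDS action
  }));
}
```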

We believe this chat-based alternative will offer a more intuitive approach to Orchestration building, empowering users to easily create complex automations in a few simple steps.

How Generative AI Drives Breakthroughs in Science

In our study of generative AI, we see numerous benefits for science-based industries. While challenges exist, the technology offers advantages that can transform innovation boundaries. These advantages include:

Innovative idea generation

Generative AI amplifies idea generation by analyzing vast datasets and complex patterns, producing innovative concepts that stimulate breakthroughs and new directions for research. The combination of data-driven insights and AI-driven imagination sparks creative thinking, propelling innovation both inside and outside the laboratory.

Data synthesis and augmentation

Using chat-based LLMs allows you to refine datasets by asking interactive and intuitive questions, accelerating the data transformation process. This not only expedites research endeavors but may uncover latent relationships, empowering analysts with more comprehensive insights for informed decision-making.

Conclusion

Our goal is to fully explore and realize the synergy between DLX’s connectivity and an LLM’s contextual awareness to create an interface that enhances productivity and efficiency alongside human operators. This potential opens new possibilities within the laboratory, as human intelligence collaborates with AI augmentation, ultimately reshaping the landscape of connectivity and automation for the future. In an upcoming blog post, we will delve further into our exploration of the possibilities we envision for DLX and generative AI in the context of a ‘digital lab assistant.’

Click here to share your ideas about how you’d like to see generative AI integrated into your lab of the future; we would love to get your input!