<?xml version="1.0" encoding="utf-8"?>
  <rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
      <title>BIOVIA</title>
      <link>https://blog.3ds.com/brands/biovia/feed.xml</link>
      <description>BIOVIA</description>
      <lastBuildDate>Thu, 09 Apr 2026 20:23:53 GMT</lastBuildDate>
      <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
      <generator>3DExperience Works</generator>
      <atom:link href="https://blog.3ds.com/brands/biovia/feed.xml" rel="self" type="application/rss+xml"/>

      <item>
      <title>
      <![CDATA[ Making Generative Small Molecule Design Better ]]>
      </title>
      <link>https://blog.3ds.com/brands/biovia/making-generative-small-molecule-design-better/</link>
      <guid>https://blog.3ds.com/guid/300677</guid>
      <pubDate>Mon, 06 Apr 2026 12:46:26 GMT</pubDate>
      <description>
      <![CDATA[ BIOVIA Generative Therapeutics Design integrates NVIDIA MolMIM NIM, combining generative AI with classical simulation to design better drugs faster, in a unified workflow. 
 ]]>
      </description>
      <content:encoded>
      <![CDATA[ 
Over the past several years, generative artificial intelligence (AI) has attracted significant attention in drug discovery, often accompanied by claims that AI alone will revolutionize therapeutic design. The ability of generative models to rapidly propose novel molecular structures, explore vast chemical spaces, and optimize compounds across multiple properties simultaneously represents a genuine technological leap. AI, as a technological disruptor in drug discovery, has promised to accelerate molecular design through unprecedented scale, automation and data-driven creativity. However, as excitement around AI continues to grow, an important question remains: Is generative AI sufficient on its own to reliably produce successful drug candidates?



Experience suggests that transformative innovation rarely comes from a single technology operating in isolation. Instead, the enduring advances emerge when complementary approaches combine to overcome each other’s limitations. In therapeutic discovery, physics-based simulation and classical computational chemistry remain essential for understanding molecular behavior, predicting binding energetics, and ensuring biological and chemical feasibility. Generative AI introduces powerful exploratory capabilities, but without integration into scientifically grounded modeling and simulation frameworks, AI-generated molecules can struggle to reliably translate into viable therapeutic candidates. The future of small-molecule design is therefore unlikely to be defined by AI replacing classical methods, but rather by integrating both into unified discovery workflows. That is what we have observed with BIOVIA Generative Therapeutics Design (GTD) when combined with the generative NVIDIA MolMIM algorithm.



Better Together: BIOVIA GTD Seamlessly Integrates NVIDIA MolMIM



BIOVIA Generative Therapeutics Design is an integrated scientific software solution on the 3DEXPERIENCE platform that accelerates the discovery and optimization of small-molecule therapeutics by combining generative AI with established computational chemistry and data management capabilities. The solution provides scientists with a unified environment in which they can define and model biological targets, anti-targets (off-targets and liabilities), and ADMET properties, and combine design objectives such as potency, selectivity, and drug-like properties into a Target Product Profile (TPP). The algorithms automatically generate candidate molecules that are then iteratively optimized to identify those that best satisfy these multiple criteria.



Since its inception, GTD has employed various methods to generate new, related molecules from the best-so-far input molecules. The methods are based on mechanistically correct chemical transformations, group replacements, atom or bond replacements, matched molecular pair substitutions, and group or reaction enumeration. These methods yield valid structures in every case because they are fundamentally based on the rules of organic chemistry.



NVIDIA MolMIM is a generative model that proposes new small-molecule structures by learning patterns from large collections of known compounds. At a high level, it works by translating molecules into an internal “latent space” numerical representation that implicitly captures their underlying chemical features. Molecules that are chemically similar tend to cluster near one another in this space, enabling smooth exploration of chemical space without performing discrete operations on atoms and bonds.



Once this chemical space has been learned, MolMIM can generate new molecules by starting from an existing compound and making controlled changes, or by making random perturbations that explore nearby regions of chemical space and yield related but novel structures. These generated molecules can then be ranked using external criteria — for example, predicted properties, docking scores, or other structure-based evaluations — and the model can iteratively propose new candidates based on the top-ranked structures. In other words, MolMIM is a natural fit for the GTD workflow of iterative optimization via molecular structure refinement.
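To make the iterate-perturb-rank pattern concrete, here is a minimal sketch of latent-space optimization in Python. The `encode`, `decode`, and `score` callables are hypothetical stand-ins for the model’s encoder/decoder and an external ranking function (for example, docking scores or predicted properties against a TPP); it illustrates the pattern, not MolMIM’s actual implementation.

```python
import numpy as np

def optimize_in_latent_space(seed_smiles, encode, decode, score,
                             n_rounds=10, n_samples=50, sigma=0.1):
    """Iteratively perturb a latent vector and keep the best-scoring molecule.

    `encode`, `decode`, and `score` are hypothetical callables standing in
    for the model's encoder/decoder and an external ranking function.
    """
    best_smiles, best_score = seed_smiles, score(seed_smiles)
    z = encode(seed_smiles)  # molecule -> latent-space vector
    for _ in range(n_rounds):
        # Controlled random perturbations explore the neighborhood of z.
        candidates = [decode(z + sigma * np.random.randn(*z.shape))
                      for _ in range(n_samples)]
        scored = [(score(s), s) for s in candidates if s is not None]
        if not scored:
            continue
        top_score, top_smiles = max(scored)
        if top_score > best_score:
            best_score, best_smiles = top_score, top_smiles
            z = encode(best_smiles)  # recenter on the best-so-far molecule
    return best_smiles, best_score
```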



The Sum of the Parts: Combining AI with Classical Simulation



Together, MolMIM and the “classic” GTD chemical transformations work well to propose chemically reasonable variations and new scaffolds, while relying on downstream physics-based modeling and domain expertise to determine which designs are most likely to succeed experimentally. When optimizing against a TPP, we have found that MolMIM and chemical transformations work better together than either approach does alone. Evidently, the step sizes and directions of the paths they trace through chemical space are complementary.



In this emerging paradigm, innovation is no longer driven by AI or classical simulation alone, but by their convergence into a unified scientific experience. In the future of drug discovery, the greatest breakthroughs occur when the whole is greater than the sum of its parts.



Watch the video to see how BIOVIA GTD gives you seamless access to NVIDIA MolMIM on the 3DEXPERIENCE platform:




 ]]>
      </content:encoded>
      </item>
<item>
      <title>
      <![CDATA[ Reshaping Drug Discovery and Materials Innovation ]]>
      </title>
      <link>https://blog.3ds.com/brands/biovia/reshaping-drug-discovery-and-materials-innovation/</link>
      <guid>https://blog.3ds.com/guid/300327</guid>
      <pubDate>Mon, 23 Mar 2026 19:00:00 GMT</pubDate>
      <description>
<![CDATA[ How the Dassault Systèmes-NVIDIA collaboration is accelerating drug and materials discovery
 ]]>
      </description>
      <content:encoded>
      <![CDATA[ 
Design the Impossible



Artificial intelligence (AI) is reshaping how organizations approach drug discovery and materials innovation. Dassault Systèmes’ expanded partnership with NVIDIA represents a significant advancement in these areas, combining NVIDIA-accelerated AI with BIOVIA’s science-based Virtual Twins to help researchers discover novel drug candidates and chemicals with unprecedented speed and accuracy.



This collaboration builds on the broader partnership between Dassault Systèmes and NVIDIA, extending advanced AI capabilities directly into BIOVIA’s scientific solutions on the 3DEXPERIENCE platform. The result is an immersive scientific environment where computational power meets scientific rigor, enabling researchers to explore possibilities that were previously out of reach.



How NVIDIA’s Generative AI Enhances BIOVIA’s Scientific Experiences



The partnership integrates NVIDIA’s AI infrastructure, accelerated computing and open generative models with BIOVIA’s Virtual Twin technology, creating a unified industrial AI architecture that boosts capability, speed, and accessibility across drug discovery and materials science. This approach creates what both companies call “trustworthy Industrial AI.” These systems not only generate possibilities but also produce scientifically valid candidates that are more likely to succeed in real-world applications.



In drug discovery, NVIDIA’s BioNeMo models are seamlessly integrated into BIOVIA’s drug discovery solutions on the 3DEXPERIENCE platform, working alongside BIOVIA’s established generative AI and physics-based modeling tools to generate and optimize novel molecular candidates at scale, significantly expanding the chemical and biological space while maintaining physics-based validation.



For materials science applications, the partnership enables atomistic simulation with machine-learned interatomic potentials (MLIPs) and GPU-accelerated molecular dynamics through NVIDIA BatchedMD, combined with BIOVIA Materials Studio. This combination helps researchers simulate complex, heterogeneous materials systems with quantum-level accuracy while maintaining production-scale speed—a balance that classical force fields struggle to achieve for mixed organic-inorganic and reactive chemistries.







Expanding BIOVIA’s Virtual Twin Offerings



Dassault Systèmes’ expanded partnership with NVIDIA extends BIOVIA’s 3D UNIV+RSES, the collaborative environment where scientists work with AI-powered Virtual Twins.



Virtual Twins are dynamic, science-based digital representations grounded in the fundamental laws of physics, chemistry and biology that allow researchers to simulate complex molecular and biological systems, explore vast design spaces, and predict behavior with high fidelity, minimizing the need for physical experimentation.



NVIDIA accelerates these Virtual Twins in three critical ways:




Scale: GPU acceleration enables larger systems and broader design exploration.



Speed: High-performance computing dramatically reduces simulation time.



Intelligence: Generative AI proposes novel candidates that Virtual Twins can immediately evaluate and refine.




The integration of NVIDIA Inference Microservices (NIM) into the 3DEXPERIENCE platform democratizes access to state-of-the-art science. Chemists and biologists can now tap into advanced AI capabilities through intuitive interfaces, removing the technical barriers that once limited who could use these tools.



From Discovery to Market: Shortening Development Cycles



The impact extends across the entire innovation pipeline.



In therapeutics, the platform accelerates every stage from ideation to therapy by unifying multimodal data to predict safety, efficacy and manufacturability earlier in the process. This early insight reduces costs, shortens timelines and improves success rates, helping biopharma companies bring breakthrough therapies to patients faster.



For materials innovation, the collaboration transforms simulation into a data engine for innovation. By generating large simulation datasets, researchers can support surrogate modeling and AI-driven property prediction for batteries, photoelectric materials, catalysts, polymers and sustainable products. The ability to explore materials design spaces at scale enables efficient screening and optimization across diverse application areas, including energy, automotive and pharmaceutical R&D.



Enterprise-Ready Industrial AI



Enterprise adoption requires more than computational power. The Dassault Systèmes-NVIDIA partnership addresses critical concerns about data security, intellectual property protection and sovereign deployment requirements through enterprise-grade infrastructure and secure cloud options.



The partnership also emphasizes making advanced AI accessible to every scientist, not just computational specialists. BIOVIA’s Virtual Advisors augment workflows and enhance early-stage discovery across all scientific disciplines, allowing domain experts to focus on scientific questions rather than technical implementation.



Driving the Next Era of Scientific Innovation



The collaboration between Dassault Systèmes and NVIDIA reflects a fundamental shift in R&D. By merging BIOVIA’s chemistry and materials science expertise with NVIDIA’s accelerated computing and AI capabilities, the partnership transforms Virtual Twins into scalable, AI-augmented discovery engines.



Organizations seeking life-saving therapies or sustainable new materials now have a digital foundation that helps them see what was once invisible and design what seemed impossible. Whether you’re developing breakthrough therapeutics or engineering high-performance materials, the Dassault Systèmes-NVIDIA partnership provides the tools to transform how your organization innovates. Thousands of scientists and engineers are already using these capabilities to drive discovery forward. The question is no longer whether AI will reshape scientific R&D, but how quickly your organization can harness its potential.




Learn More about BIOVIA & NVIDIA





 ]]>
      </content:encoded>
      </item>
<item>
      <title>
      <![CDATA[ From Molecular Graphs to Force Fields with AI ]]>
      </title>
      <link>https://blog.3ds.com/brands/biovia/from-molecular-graphs-to-force-fields-with-ai/</link>
      <guid>https://blog.3ds.com/guid/300473</guid>
      <pubDate>Wed, 04 Mar 2026 18:38:58 GMT</pubDate>
      <description>
      <![CDATA[ BIOVIA’s Continued Collaboration with UC Berkeley’s MSSE Program
 ]]>
      </description>
      <content:encoded>
      <![CDATA[ 
BIOVIA is proud to continue the collaboration with UC Berkeley’s Master of Molecular Science and Software Engineering (MSSE) program, now in its third year of sponsoring capstone projects that bring together industry expertise and emerging computational science talent. The first capstone in Spring 2024 produced CalMPNN, a Python module for protein sequence design and stability prediction. For Spring 2025, a new team tackled a foundational challenge in computational chemistry: using machine learning to automate atom type assignment in molecular force fields.



The Challenge



Before a molecular dynamics simulation can model how a drug interacts with its target, every atom in the molecule must be classified into a specific “atom type” that determines the physical parameters (bond lengths, angles, charges, etc.) governing how the molecule behaves. Traditionally, this classification relies on rule-based tools that can be brittle, slow on large datasets, and difficult to extend. The Spring 2025 capstone team set out to replace that approach with machine learning.



What the Team Built



The result is atoMLtype, a modular Python toolkit that learns the mapping from molecular structure to atom type in the widely used GAFF2 (General AMBER Force Field 2) framework. The team implemented and compared multiple modeling approaches, from a Random Forest baseline using hand-crafted atomic features to several Graph Neural Networks that operate directly on the molecular graph. They also developed thoughtful solutions for handling chemical symmetries in the force field’s type system, demonstrating real depth of understanding of both the underlying chemistry and the modeling challenges.
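For readers curious what such a baseline looks like, here is a minimal sketch of supervised atom typing: hand-crafted per-atom features computed with RDKit feeding a Random Forest classifier. The `training_set` variable is a hypothetical placeholder (its labels would come from a reference GAFF2 typing tool), and the sketch illustrates the general approach rather than the team’s atoMLtype code.

```python
from rdkit import Chem
from sklearn.ensemble import RandomForestClassifier

def atom_features(atom):
    """Hand-crafted per-atom descriptors for a baseline classifier."""
    return [
        atom.GetAtomicNum(),
        atom.GetTotalDegree(),
        atom.GetTotalNumHs(),
        atom.GetFormalCharge(),
        int(atom.GetIsAromatic()),
        int(atom.IsInRing()),
        int(atom.GetHybridization()),  # hybridization as an integer code
    ]

# `training_set` is a hypothetical list of (SMILES, per-atom GAFF2 labels)
# pairs, with labels produced by a reference rule-based typing tool.
X, y = [], []
for smiles, atom_types in training_set:
    mol = Chem.MolFromSmiles(smiles)
    for atom, label in zip(mol.GetAtoms(), atom_types):
        X.append(atom_features(atom))
        y.append(label)

model = RandomForestClassifier(n_estimators=200).fit(X, y)
```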



The toolkit provides a complete workflow: loading molecular datasets, training and evaluating models, and producing detailed analysis, all aimed at making atom typing faster, more scalable, and more accurate for computational chemistry pipelines.



Fresh Perspectives, Real Impact



As with the first capstone, my colleague Reed Harrison and I served as project mentors, guiding the team through the intersection of software engineering best practices and scientific rigor. These collaborations continue to demonstrate the value of pairing BIOVIA’s domain expertise with the energy and fresh thinking of Berkeley’s MSSE students.



Brandon Robello, one of the students on the team, reflected on the experience: 




This project was an incredible opportunity to apply modern machine learning techniques to a real-world scientific challenge. Working with Dr. Rohith Mohan and Dr. Reed Harrison on atom-type prediction from molecular graphs sharpened my skills in model development and software engineering, while deepening my appreciation for the complexity of chemical representation.








Jeremy Millford, another team member, shared a similar enthusiasm:




Exploring the project, learning new tools, and creating an impactful deliverable was a wonderful opportunity to let our scientific curiosity and new skills run wild. The balance between the freedom to explore and the structure of a professional assignment made the experience both highly valuable and genuinely fun. I found myself proactively wrapping up coursework, so I could work on it just a bit more. I also feel like I learned a ton.




Looking Ahead



Our partnership with UC Berkeley’s MSSE program continues to be a rewarding experience for both sides, providing students with real-world challenges at the forefront of computational science while bringing innovative approaches into BIOVIA’s work in drug discovery. We look forward to continuing this collaboration and seeing what the next cohort of talented students will build.







📩 Want to find out the latest news about BIOVIA events, customer stories, blogs and more? Join the BIOVIA newsletter today!




 ]]>
      </content:encoded>
      </item>
<item>
      <title>
      <![CDATA[ How to Build an AI-Ready Lab ]]>
      </title>
      <link>https://blog.3ds.com/brands/biovia/how-to-build-an-ai-ready-lab/</link>
      <guid>https://blog.3ds.com/guid/298616</guid>
      <pubDate>Fri, 06 Feb 2026 15:09:28 GMT</pubDate>
      <description>
      <![CDATA[ AI transforms R&D only when data is connected, consistent, and scientifically meaningful.
 ]]>
      </description>
      <content:encoded>
      <![CDATA[ 
How to Build an AI-Ready Lab



Every organization claims to be leveraging AI to transform R&D, yet few labs are truly prepared. The challenge isn’t a lack of instruments or software, but rather data that is fragmented, inconsistent, and disconnected from the scientific context needed for machine learning, prediction, or generative design. Building an AI-Ready lab is not about installing new algorithms—it’s about building a data foundation that unifies experiments, connects systems, captures metadata, preserves lineage, and creates a continuous flow between real-world science and digital intelligence. This blog explores how to build that foundation from the ground up: establishing the architecture, creating the data lake, developing the knowledge layer, implementing governance, and achieving seamless integration between physical and virtual experiments that ultimately allows AI to accelerate scientific breakthroughs.



Why AI Matters in Modern R&D



For science-driven industries—pharma, biotech, chemicals, materials, and CPG—AI is reshaping how discovery happens. Traditional R&D has been built on slow, sequential, trial-and-error experimentation. AI shifts this toward a fast, predictive, data-driven model. It cuts down the time from idea to optimized candidate, reduces dependence on costly physical experiments, and automates large portions of data processing so scientists can focus on thinking rather than searching, formatting, or cleaning. Modern labs generate massive volumes of complex data; AI is the only technology capable of extracting meaningful patterns from this scale of information. By connecting data across ELNs, LIMS, instruments, modeling systems, and literature, AI uncovers relationships humans would overlook. It adds context—linking results to specific runs, instruments, conditions, and users—making insights more traceable, reliable, and actionable. AI accelerates innovation not by replacing scientists, but by amplifying their ability to explore more ideas, make better decisions, and reach breakthroughs faster.



Walk into almost any modern lab today and you’ll see world-class scientists surrounded by cutting-edge instruments. LC/MS systems hum in the background, automation platforms move with precision, and digital notebooks have replaced stacks of handwritten pages.



And yet—despite all the technology—most labs are nowhere near ready for AI.



Why? Because the single most important ingredient for AI isn’t the algorithm. It’s the data.



Not just having data. Not just digitizing data. But building a data foundation that is complete, connected, contextualized, compliant, and ready to fuel machine learning, predictive models, and generative systems.



1. The Myth: AI Starts with Models.



The Reality: AI Starts with the Lab.



Executives often say they want “AI to improve R&D productivity.” But AI cannot accelerate decision-making if the data behind those decisions is incomplete or trapped in silos.



Today, scientific data is scattered across:




ELNs that store experimental descriptions



LIMS that track samples and QC data



Instruments producing spectroscopy files, images, and chromatograms



Spreadsheets living on shared or local drives



PDFs, reports, and PowerPoints documenting meetings



Databases holding structured records



Modeling systems generating simulations




Each of these systems is valuable. None of them—on their own—makes a lab ready for AI.



AI thrives on connection, and most labs are still built on islands.



Building an AI-Ready lab means rethinking your lab not as a set of tools, but as a data ecosystem.



2. The Foundation: A Unified Data Ecosystem



Getting AI-Ready starts with deciding where your data lives, how it flows, and how it connects. Successful AI transformations share a common pattern: they build a data architecture with three interconnected layers—operational databases, a scientific data lake, and a knowledge layer that provides meaning and context.



Databases: The Operational Nervous System



Databases sit under core systems like traditional ELN, LIMS, inventory management, or sample tracking. They store structured, regulated records with strict schemas and high reliability. These databases keep the lab running. They support compliance, traceability, and controlled vocabularies.



But most of the tools only store structured data—and only the subset that fits neatly into tables.



For AI, this is necessary but nowhere near sufficient.



The Data Lake: The Scientific Memory of the Lab



If a database is the lab’s nervous system, the data lake is its long-term memory.



A modern laboratory produces enormous amounts of unstructured scientific content:




Raw instrument data



High-resolution assay images



NMR spectra and chromatograms



ELN attachments



Simulation outputs



PDFs and presentations



Sensor logs



Robotic workflow files




None of these fits well in a traditional database. All of it is essential for powerful AI.



A scientific data lake accepts data in any format—structured, semi-structured, or unstructured—and stores it as-is. When AI or analytics tools need it, the structure is applied on demand. This flexibility is what makes the data lake the heart of an AI-Ready lab.
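As a small illustration of schema-on-read, the sketch below loads raw instrument JSON exactly as it was written and applies tabular structure only at query time. The file layout and field names (`wells`, `raw_signal`, and so on) are hypothetical.

```python
import json
import pandas as pd

def load_assay_results(raw_paths):
    """Schema-on-read: raw files stay as-is; structure is applied at query time."""
    records = []
    for path in raw_paths:
        with open(path) as fh:
            payload = json.load(fh)  # stored exactly as the instrument wrote it
        # Pick out and normalize fields only when the analysis needs them.
        for well in payload.get("wells", []):
            records.append({
                "plate_id": payload.get("plate_id"),
                "well": well.get("position"),
                "signal": well.get("raw_signal"),
            })
    return pd.DataFrame(records)
```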



The key is to ensure all data—experimental, analytical, simulation, formulation, recipe, process—flows into this environment with full metadata captured.



The Knowledge Layer: Turning Data into Insights



Data alone cannot power AI. AI needs context.



A knowledge layer provides that context by enforcing consistent vocabularies, capturing rich metadata, and preserving data lineage so that every experiment, batch, formulation, analytical result, and scientific conclusion is connected. This is what turns isolated files into connected science. When relationships between data points are explicit, AI systems can interpret how inputs drive outcomes, learn more efficiently, and generate better predictions with fewer experiments.



A common way to build this semantic foundation is using RDF — the Resource Description Framework — which structures information as a web of linked relationships. In this model, the knowledge layer becomes not just a place where data is stored, but a system that truly understands how the pieces fit together. This is the moment when AI shifts from processing data to accelerating discovery.
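Here is a minimal sketch of that idea using the rdflib Python library. The namespace and predicate names are illustrative rather than a published ontology; the point is that the relationships between an experiment, its sample, and its measurement context become explicit, traversable links.

```python
from rdflib import Graph, Literal, Namespace

LAB = Namespace("https://example.org/lab/")  # illustrative namespace

g = Graph()
experiment = LAB["experiment/EXP-0042"]
sample = LAB["sample/S-108"]

# Make the relationships explicit: which sample an experiment used,
# what was measured, and under which conditions.
g.add((experiment, LAB.usedSample, sample))
g.add((experiment, LAB.performedOn, Literal("2026-01-15")))
g.add((sample, LAB.measuredPH, Literal(7.0)))
g.add((sample, LAB.measuredAtTempC, Literal(25.0)))

# Downstream tools and AI pipelines can now traverse the links.
for _, predicate, obj in g.triples((experiment, None, None)):
    print(predicate, obj)
```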



Learn how BIOVIA ONE Lab connects all on one platform.



3. Creating Flow: Connecting Instruments, Systems, and the Data Platform



An AI-Ready lab cannot tolerate manual uploads, inconsistently named files, or critical assay results stored in somebody’s folder as “Final_v3_EDITED_2.xlsx.”



Data must move automatically from:



Instruments → Lab Systems → Data Lake → Knowledge Layer → AI Models



This requires:




Instrument connectivity



API-driven system integrations



Workflow orchestration



Automated metadata capture



Templates that enforce scientific consistency




When every experiment is automatically captured, tagged, stored, and contextualized, the lab becomes a continuous source of machine-readable knowledge.



That is the moment AI becomes not just possible—but powerful.
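To make automated capture concrete, here is a minimal sketch of an ingestion step that copies a raw instrument file into a data lake together with a metadata sidecar recording provenance. The lake path and metadata fields are hypothetical.

```python
import hashlib
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

LAKE = Path("/data/lake/raw")  # hypothetical lake location

def ingest(instrument_file: Path, instrument_id: str, run_id: str) -> Path:
    """Copy a raw instrument file into the lake with a metadata sidecar."""
    digest = hashlib.sha256(instrument_file.read_bytes()).hexdigest()
    dest = LAKE / instrument_id / run_id / instrument_file.name
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(instrument_file, dest)
    metadata = {
        "instrument": instrument_id,
        "run": run_id,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "sha256": digest,  # lineage: the copy provably matches the source
        "source_path": str(instrument_file),
    }
    # The sidecar keeps context next to the raw data, whatever its format.
    dest.with_suffix(dest.suffix + ".meta.json").write_text(json.dumps(metadata))
    return dest
```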



4. Preparing Data for AI: Cleaning, Curation, and Connection



Before data flows into the lake, it should be automatically prepared for use in AI models. Key tasks include:




Standardizing units and formats



Aligning naming conventions



Removing redundancy



Linking data across systems



Annotating with metadata



Capturing lineage and uncertainty



Scoring data quality



Creating curated training sets




These steps turn raw science into computable science, ready for machine learning, predictive modeling, and generative design. This is where the lab becomes a true partner to AI.
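As one concrete example of these preparation steps, the sketch below standardizes mixed concentration units into a single canonical unit while preserving the original unit for lineage. The column names and conversion table are illustrative assumptions.

```python
import pandas as pd

# Conversion factors to a single canonical unit (µM); illustrative only.
TO_MICROMOLAR = {"nM": 1e-3, "uM": 1.0, "mM": 1e3}

def standardize_concentration(df: pd.DataFrame) -> pd.DataFrame:
    """Normalize mixed concentration units while preserving lineage."""
    out = df.copy()
    out["conc_uM"] = out["conc_value"] * out["conc_unit"].map(TO_MICROMOLAR)
    out["conc_source_unit"] = out["conc_unit"]  # keep the original unit
    return out.drop(columns=["conc_value", "conc_unit"])

raw = pd.DataFrame({"conc_value": [250.0, 0.5, 2.0],
                    "conc_unit": ["nM", "uM", "mM"]})
print(standardize_concentration(raw))
```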



5. Data Governance: The Quiet Hero of AI Success



Every company wants AI. Very few want the discipline required to make AI successful.



Data governance is not glamorous. But it is the difference between:




An AI system that reinforces noise



And an AI system that accelerates discovery




Governance defines:




How experiments are documented



What metadata must be captured



How results are named and structured



Who owns and stewards each data set



How versions and audit trails are handled



How quality is measured and monitored



How compliance is enforced




Without governance, a data lake becomes a data swamp. With governance, it becomes a scientific engine.



6. Integrate Real and Virtual Experiments



AI-Ready labs unify physical and virtual experimentation into a single, continuous scientific process. What happens on the bench is instantly connected to what happens in silico — from molecular simulation and materials modeling to predictive formulation, virtual twins, and generative AI that proposes new hypotheses. This fusion is now essential across chemicals, materials, life sciences, and consumer products, enabling teams to explore more possibilities, make faster decisions, and arrive at breakthrough innovations with greater confidence.



The AI-Ready lab becomes a feedback loop:




AI designs or predicts candidates



Lab executes and generates real-world results



Results flow back into the AI models



Models become smarter, workflows accelerate








This loop only works when data flows seamlessly across systems.



7. Build the AI Layer: Models, Analytics, and Scientific Learning Loops



After the data foundation is built, AI can start delivering real value. While specific applications vary by industry, many organizations see similar patterns in how AI enhances scientific work.



AI use cases differ by industry, but common ones include:



Chemicals & Materials




Predictive materials design



Simulation-augmented lab testing



Property prediction



Generative design of polymers, catalysts, coatings




CPG & Formulation




Predictive formulation optimization



Sensory and texture modeling



Ingredient substitution



Sustainability-driven formulation redesign




Pharma & Biotech




Assay optimization



Biologics design



Analytical method development



Reaction prediction




AI becomes a natural extension of scientific workflows—not an afterthought.



8. Toward the AI-Driven Lab: Closing the Loop



When the foundation is in place, the lab evolves rapidly:



Experiments feed models. Models propose new experiments. Robotics executes them. Data flows back automatically. The model improves. The cycle continues.



This self-improving loop—where the Virtual + Real worlds reinforce each other—is the future of scientific R&D.



It is the destination of an AI-Ready lab. And it is only possible because the data foundation is strong.



The Takeaway: AI in the Lab Begins with Data



AI is not something you add at the end of digital transformation. It is something you start building from the beginning.



An AI-Ready lab is built on:




A modern data architecture



Seamless data flow



Strong governance



High-quality digital systems



A unified scientific data model



A culture of data discipline




When you get the data right, AI becomes a natural extension of the lab—an intelligent layer woven into every experiment, every decision, and every discovery. This approach is how leading companies transform their labs, make AI real and reliable, and build the future of R&D.



Discover how BIOVIA helps organizations transform their labs into: 




💡 AI-Ready environments








📩 Want to find out the latest news about BIOVIA events, customer stories, blogs and more? Join the BIOVIA newsletter today!




 ]]>
      </content:encoded>
      </item>
<item>
      <title>
      <![CDATA[ Automating CMC Dossiers With AI ]]>
      </title>
      <link>https://blog.3ds.com/brands/biovia/automating-cmc-dossiers-with-ai/</link>
      <guid>https://blog.3ds.com/guid/299599</guid>
      <pubDate>Fri, 23 Jan 2026 13:52:30 GMT</pubDate>
      <description>
      <![CDATA[ For a New Era of Regulatory Compliance
 ]]>
      </description>
      <content:encoded>
      <![CDATA[ 
The regulatory landscape for Chemistry, Manufacturing, and Controls (CMC) dossier submissions is shifting decisively toward structured, data-driven submissions. As global standards like KASA, IDMP, and FHIR become the norm, traditional document-driven processes are proving inadequate. Regulatory bodies, including the FDA, are moving to structured data to enhance consistency, transparency, and validation. This evolution demands a more dynamic, automated, and collaborative approach to dossier creation.



This shift from static documents to dynamic, data-driven workflows is a strategic necessity. By embracing automation and a data-first mindset, regulatory teams can achieve near real-time submissions, ensure end-to-end digital continuity, and maintain unparalleled consistency across all documentation. This post explores how a data-centric approach, powered by Deterministic and Generative AI, can streamline dossier creation and set a new standard for regulatory excellence.



The Technology Backbone of Regulatory Transformation



Several interconnected technologies form the foundation of an automated CMC dossier solution. When integrated on a single platform, they create a powerful ecosystem that turns raw data into submission-ready content with integrity and speed.



Ontology Management and Semantic Graphs



At the core of this transformation is Ontology Management. An ontology is a formal regulatory data model that defines how CMC data elements relate and validate each other. For CMC, this means creating a structured framework that aligns with regulatory requirements. It establishes a common language for all data, from lab results to manufacturing specifications. This semantic backbone ensures consistency and improves data governance, with the “Data Steward” role becoming accountable for maintaining regulatory consistency across the entire data lifecycle.



Building on this, Semantic Knowledge Graphs connect diverse datasets in a deeply contextual way. Instead of storing data in isolated silos, a knowledge graph creates a web of interconnected information. This allows for more sophisticated queries and provides a holistic view of the data, revealing insights that would be difficult to find with traditional methods.
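To give a flavor of what querying such a graph looks like, here is a minimal sketch using rdflib’s SPARQL support. The `cmc:` terms and the file name are illustrative assumptions, not a published regulatory ontology.

```python
from rdflib import Graph

g = Graph()
g.parse("cmc_graph.ttl")  # hypothetical serialized CMC knowledge graph

# Find every batch whose analysis was tested against a given specification.
results = g.query("""
    PREFIX cmc: <https://example.org/cmc/>
    SELECT ?batch ?result
    WHERE {
        ?batch    cmc:hasAnalysis   ?analysis .
        ?analysis cmc:testedAgainst cmc:spec_assay_purity .
        ?analysis cmc:resultValue   ?result .
    }
""")
for batch, result in results:
    print(batch, result)
```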



Structured Authoring and Deterministic AI



Structured authoring serves as the foundation for transforming knowledge graph data into well-defined, pre-templated tables, charts, and other essential data elements. Because the knowledge graph is continuously refreshed in near real time, the content within the structured authoring environment can likewise be updated until a specific section, paragraph, or chapter is formally frozen and locked from further edits. Structured authoring effectively provides the front-end interface through which authors can observe, curate, and narrate the text associated with these tables and charts, all of which is driven by Deterministic AI. This ensures there is no distortion in the data pathway from the source to the structured authoring environment—the data presented as tables and charts remains fully traceable and reliable.



Once the appropriate tables and charts are created, you can, if you wish, use a Large Language Model (LLM) to generate the corresponding narrative. This must be a collaborative effort between the LLM and the author, who remains responsible for continually assessing, curating, and verifying any content generated by Generative AI. Of course, LLMs should not be trusted without human oversight in regulated CMC contexts. But the combination of a Deterministic data flow to create the tables and an LLM to generate the narrative gives us powerful accelerators toward automating CMC end to end.



From Data Source to CMC



The Shift to FHIR



The healthcare and life sciences industry is rapidly embracing the Fast Healthcare Interoperability Resources (FHIR) standard for data exchange. Regulatory bodies such as the EMA and FDA are increasingly mandating or encouraging its use for dossier submissions, and legacy formats like PDF and SPL are becoming obsolete. By integrating deterministic mapping from internal data ontologies to FHIR standards, organizations ensure their regulatory operations are not only compliant with current mandates but also ready for future requirements such as FHIR R6. Automated platforms capable of generating FHIR-native bundles directly from live source data reduce rework as regulatory data standards evolve, eliminating manual conversion tasks and future-proofing the dossier submission process.
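For a sense of what a FHIR-native payload looks like, the sketch below assembles a minimal Bundle of Observation resources from internal test records. The mapping is deliberately simplified and illustrative; a production system would map the internal ontology onto validated FHIR profiles.

```python
import json

def to_fhir_bundle(lot_results):
    """Assemble a minimal FHIR-style Bundle from internal test records.

    The mapping is deliberately simplified; a production system would map
    the internal ontology onto the appropriate FHIR resources and profiles.
    """
    entries = [{
        "resource": {
            "resourceType": "Observation",
            "status": "final",
            "code": {"text": record["test"]},
            "valueQuantity": {"value": record["value"], "unit": record["unit"]},
        }
    } for record in lot_results]
    bundle = {"resourceType": "Bundle", "type": "collection", "entry": entries}
    return json.dumps(bundle, indent=2)

print(to_fhir_bundle([{"test": "Assay (HPLC)", "value": 99.2, "unit": "%"}]))
```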



Tangible Value of a Data-Driven Approach



Adopting an automated, data-centric model for CMC dossier creation with a unified platform delivers measurable value across the organization. Industry benchmarks consistently show cost savings, reductions in the time needed to create stability and batch analysis sections, and decreases in overall CMC content authoring time. These efficiencies accelerate the entire submission timeline, allowing organizations to bring products to market faster.



Reduce Time and Costs



Automating manual processes significantly reduces non-value-added tasks. Teams can eliminate repetitive data verification, reduce review cycles, and shorten response times to health authorities. This leads to right-first-time submissions and fewer post-submission issues, accelerating the path to market.



Enhance Quality and Compliance



Automation minimizes errors caused by manual data entry and transfer. It ensures the consistency and completeness of data and documents while providing end-to-end traceability. This “data integrity by design” approach helps organizations keep up with evolving agency requirements, including new data formats like FHIR.



Improve Collaboration



A unified platform removes the complexity of working across multiple, disconnected systems. Teams can collaborate simultaneously on submissions, share knowledge through structured content, and formalize internal best practices. This streamlined environment also makes it easier to integrate new partners and suppliers into the workflow, fostering greater efficiency.



The 3DEXPERIENCE Platform: A Unified Solution



The 3DEXPERIENCE platform is an integrated, end-to-end environment for data-driven CMC dossier creation. It is a semantically aware environment where ontologies inform the data model, delivering a comprehensive automation solution. By moving from a document-centric to a data-driven model, organizations can automatically generate new CMC documents from a wide variety of sources.



The platform’s data science applications allow teams to align, aggregate, and add semantics to incoming data. This enables the automated creation of data tables through queries or events. Data from sources like stability, specification, and batch analysis can be harmonized into a structured, well-informed view aligned with the regulatory model.



Key capabilities include:




Parameterized templates for automated content creation



Collaborative real-time authoring environment for technical documentation



Live data links from the semantic graph index



Full control of published output and content reuse at section, table, and image levels



Lifecycle management of individual content sections



Graphical revision history for complete traceability of all changes




This collaborative environment allows multiple authors to work simultaneously while maintaining control over content maturity. The system combines live data links with human narrative, generating submission content with exceptional automation and integrity.



The Future of Regulatory Submissions



The automation of CMC dossier creation is no longer a distant vision; it is a present-day reality with proven benefits. By consolidating data, leveraging AI, and adopting a structured, platform-based approach, life sciences companies can achieve significant gains in efficiency, quality, and compliance. The regulatory landscape will continue to evolve, and the technologies to meet these new demands are mature and available. By taking the first step toward a data-centric model, your organization can begin the journey toward a more agile, compliant, and innovative regulatory future.



Explore how end-to-end automation can remove manual CMC rework while strengthening compliance in the CMC dossier creation process.







📩 Want to find out the latest news about BIOVIA events, customer stories, blogs and more? Join the newsletter today!
 ]]>
      </content:encoded>
      </item>
<item>
      <title>
<![CDATA[ Can You Trust AI in Scientific R&D? ]]>
      </title>
      <link>https://blog.3ds.com/brands/biovia/can-you-trust-ai-in-scientific-rd/</link>
      <guid>https://blog.3ds.com/guid/299123</guid>
      <pubDate>Tue, 23 Dec 2025 16:43:09 GMT</pubDate>
      <description>
      <![CDATA[ Mitigate the risk of AI errors and chart a path forward with new initiatives.
 ]]>
      </description>
      <content:encoded>
      <![CDATA[ 
Why Trusting AI in Scientific R&D is Hard (and How to Fix It)



The promise of AI in scientific R&D is immense, but so is the risk. In scientific innovation, an error by AI could lead to a failed clinical trial, a toxic battery, or a shelf-unstable beverage, costing millions and setting back progress for years. For this reason, establishing trust in AI is not just a preference; it is a prerequisite for adoption.



However, the core methodologies of AI and traditional science are often in conflict. This post will explore the fundamental challenges that create a trust gap in scientific R&D and present a framework for bridging it. Understanding these hurdles is the first step toward using AI to accelerate innovation and secure a competitive advantage.



The Core Conflicts of AI in a Scientific Context



Trust in AI cannot be achieved without first acknowledging the inherent tensions between how AI models operate and the rigors of scientific discovery. These challenges span data integrity, model transparency, and regulatory compliance.



The “Black Box” vs. The Scientific Method



A primary conflict lies in the concept of explainability. Scientists are trained to understand the mechanism of action—the causal link between A and B. Many advanced AI models, particularly deep learning systems, function as “black boxes,” providing predictions without a clear, derivable rationale.




Life Sciences: An AI might predict a molecule will be effective against a specific biological target. However, without explaining why—for instance, detailing how it binds to a specific protein pocket—a medicinal chemist cannot confidently advance the candidate to expensive and time-consuming wet-lab validation.



Battery & Materials R&D: In the search for new cathode materials, an AI could predict a novel composition that will yield high energy density. If it cannot explain the underlying electrochemical principles ensuring its stability, researchers risk developing a battery that is chemically unstable or prone to dangerous thermal runaway.




Data Integrity and the “Silo” Problem



An AI system is only as reliable as the data it is trained on. Scientific data presents unique challenges because it is often fragmented, poorly contextualized, and siloed across legacy systems.




The Context Gap: Scientific data points are often meaningless without metadata. A pH value of 7.0 is useless without knowing the associated temperature, buffer solution, and measurement equipment. AI models trained on such uncontextualized data produce unreliable predictions that fail to generalize across different experimental conditions.



Siloed Legacy Data: In formulation development, decades of data are often trapped in disparate Electronic Lab Notebooks (ELNs), local spreadsheets, or even paper records. This fragmentation makes it impossible to create a unified data foundation for training a comprehensive AI model for the enterprise.



Negative Data Bias: Laboratories typically digitize successful experiments, while failures are rarely recorded with the same level of detail. An AI trained predominantly on positive outcomes develops a “survivor bias,” rendering it incapable of accurately predicting what won’t work—a critical function for saving time and resources.




The “Hallucination” of Physical Reality



Generative AI models are probabilistic; they are designed to predict the most statistically likely output, not the one that is physically or chemically correct. This can lead to “hallucinations” of scientifically impossible results.




Chemicals & Pharma: AI models have been documented “hallucinating” molecular structures that violate fundamental laws of chemistry, such as creating carbon atoms with five bonds or proposing molecules that are synthetically impossible to manufacture.



CPG Formulation: In food science, an AI might generate a recipe that meets all nutritional targets on paper but is physically unstable—for example, an emulsion that separates immediately or a mixture of incompatible proteins.
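A cheap, deterministic guardrail against the first failure mode above is structure sanitization. The sketch below uses RDKit to reject generated SMILES that violate basic valence rules, such as the five-bonded carbon just mentioned; it is a minimal filter, not a full synthesizability check.

```python
from rdkit import Chem

def passes_valence_check(smiles: str) -> bool:
    """Reject generated structures that violate basic valence rules."""
    mol = Chem.MolFromSmiles(smiles, sanitize=False)
    if mol is None:
        return False  # not even parseable
    try:
        Chem.SanitizeMol(mol)  # raises on impossible valences, bad aromaticity, ...
        return True
    except Exception:
        return False

print(passes_valence_check("CC(C)O"))          # True: isopropanol
print(passes_valence_check("C(C)(C)(C)(C)C"))  # False: five-bonded carbon
```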




The Regulatory and Compliance (GxP) Barrier



In regulated industries like pharmaceuticals, trust is a legal and operational concept defined by validation.




Deterministic vs. Probabilistic Models: GxP guidelines and regulations such as 21 CFR Part 11 were built on the assumption that systems are deterministic (Input A always produces Output B). Generative AI, however, is often non-deterministic. Validating a dynamic system that can produce different outputs from the same prompt presents a massive, largely unsolved challenge for regulatory submissions.



Model Drift: An AI model used for Quality Control (QC) can “drift” as it learns from new data over time. In a GxP environment, any change in a model’s behavior may require a complete re-validation, an operationally unfeasible requirement that hinders continuous improvement.




A Framework for Building Trustworthy Scientific AI



Overcoming these challenges requires a shift away from using generic AI and toward adopting systems engineered specifically for science. Trust can be built by constraining probabilistic AI with deterministic scientific principles. This approach rests on three pillars: physics-based validation, proprietary data integration, and a continuous learning cycle.



1. Physics-Based Models: The Scientific Guardrails



Standard generative AI guesses, but a scientifically aware AI must verify. The first pillar of trust involves pairing generative AI with physics-based modeling. This orthogonal approach acts as an automated “reality check.”



When a generative model proposes a new molecule or material, a medicinal chemist takes ownership of the validation phase by transferring the results into physics-based modeling tools. There, they can refine the AI-generated structure, dock it against the target to test how it binds, and verify the structure’s viability.



2. Proprietary Data: The Context Engine



Public AI models are trained on broad, shallow, and often outdated public data. A trustworthy scientific AI must be trained on narrow, deep, and proprietary information.



Solutions like BIOVIA’s Pipeline Pilot and Generative Therapeutics Design allow organizations to fine-tune AI models with their own proprietary data—including failed experiments, unique assays, and specific chemical libraries—without exposing that IP. This transforms the AI from a generic suggestion engine into a localized expert system. It learns the organization’s unique language and history, delivering recommendations that are directly relevant to that lab’s capabilities and strategic goals.



3. The Active Learning Cycle: The Trust Loop



Static AI models degrade over time. A trustworthy AI must be a dynamic system that transparently improves with every new data point. This is achieved through an Active Learning Cycle, often called the Virtual + Real (V+R) Cycle.



The process is a closed loop:




Virtual: The AI designs a new candidate molecule, material, or formulation.



Real: The lab synthesizes and tests the candidate, capturing the results.



Feedback: The new, real-world data is immediately fed back into the AI model to refine it.
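In code, the loop might look like the following skeleton, where `propose` and `run_experiment` are hypothetical stand-ins for the generative design step and bench execution, and the model is assumed to expose a scikit-learn-style `fit`. It sketches the shape of the cycle, not a production implementation.

```python
def virtual_real_cycle(model, propose, run_experiment, dataset, n_cycles=5):
    """Skeleton of the Virtual + Real loop.

    `propose` and `run_experiment` are hypothetical callables for the
    generative design step and bench execution; `model` is assumed to
    expose a scikit-learn-style fit(X, y).
    """
    for _ in range(n_cycles):
        candidates = propose(model)               # Virtual: design candidates
        for candidate in candidates:
            outcome = run_experiment(candidate)   # Real: synthesize and test
            dataset.append((candidate, outcome))  # Feedback: capture the result
        X, y = zip(*dataset)                      # retrain on all evidence,
        model.fit(X, y)                           # including failures
    return model
```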




If the AI predicts a success and the lab finds a failure, the model is penalized and learns instantly. This transparent, closed-loop system makes the AI demonstrably smarter and more accurate over time, earning the trust of scientists through proven performance. The best way to ensure this loop is maintained is by using a common data model between in silico and real-world data—making it easy to trust both real-world and in silico data equally.



Charting the Path Forward



The journey toward integrating AI into scientific R&D is an enterprise in the truest sense of the word—a bold and complex undertaking with transformative potential. For leaders, the path forward is not to adopt AI wholesale, but to choose systems designed to respect and augment the scientific method.



Standard generative AI is like a brilliant but imaginative artist; it can create anything, but it does not know if its creation can stand. Scientific AI, however, must be that artist paired with a structural engineer (physics-based models) and an archivist who knows the building’s complete history (proprietary data). Only then can we trust that what is designed is not only innovative but also viable. By adopting this structured approach, organizations can build the trust necessary to unlock the full potential of AI and drive the next generation of scientific breakthroughs.



Unlock the Future with BIOVIA Scientific AI



BIOVIA Scientific AI can transform your R&D processes by combining innovative AI technologies with robust physics-based models and proprietary data. Whether you’re aiming to accelerate innovation, ensure regulatory compliance, or optimize operational efficiency, our solutions are designed to empower scientific breakthroughs while building trust and reliability. With BIOVIA, you can deliver generative AI to every scientist.



Learn more about how BIOVIA Scientific AI can address your unique challenges and redefine the speed of R&D.
 ]]>
      </content:encoded>
      </item>
<item>
      <title>
      <![CDATA[ A Biologist’s Holiday Wish List ]]>
      </title>
      <link>https://blog.3ds.com/brands/biovia/a-biologists-holiday-wish-list/</link>
      <guid>https://blog.3ds.com/guid/298843</guid>
      <pubDate>Wed, 17 Dec 2025 19:57:46 GMT</pubDate>
      <description>
      <![CDATA[ What biologists really want—from early discovery to biomanufacturing.
 ]]>
      </description>
      <content:encoded>
      <![CDATA[ 
As 2025 wraps up and labs wind down, scientists across the biologics value chain – from early discovery to bioprocessing and manufacturing – share similar holiday wishes: better candidates likely to succeed in experiments, automated workflows with fewer manual steps, and optimized processes for high-quality yields. With BIOVIA, these wishes come true: the right capabilities, exactly when and where teams need them.



🎁 Wish #1: Faster, Smarter Biologics Discovery



With the advances in AI and drug discovery, biologists are now navigating an increasingly complex landscape. AI-driven algorithms such as AlphaFold/OpenFold, RFdiffusion, and ProteinMPNN have opened up new possibilities for rational design – from predicting 3D structures directly from amino acid sequences to designing an entirely new protein from scratch. Yet, integrating these tools efficiently into multi-stage workflows remains challenging. Combining AI-generated predictions with physics-based modeling is also critical to refine structures or evaluate binding and stability. However, all this often means juggling multiple software tools or command-line executables and doing a lot of manual work, which can be slow and frustrating.



What BIOVIA delivers:




Easy access to modern AI tools out-of-the-box




BIOVIA Discovery Studio Simulation on the 3DEXPERIENCE® platform provides easy access to novel AI algorithms – OpenFold/AlphaFold2, RFdiffusion, and ProteinMPNN – with built-in, ready-to-run protocols that bring Nobel Prize–winning science directly into drug discovery workflows.




Combining the power of AI with molecular modeling




Users can combine the power of AI with validated physics-based molecular modeling to optimize lead candidates. For instance, they can refine their AI-generated structures with molecular dynamics simulations and follow downstream workflows, such as virtual screening via protein-protein docking, or studying and optimizing formulation properties.




Integrated AI-powered discovery workflows for every biologist 




Discovery Studio Simulation offers an automated de novo protein design workflow that accelerates and simplifies the discovery of novel protein binders for a given target. This intelligent, guided process democratizes cutting-edge AI, enabling any biologist to design high-quality candidates with AI, without specialized training.




OpenFold3 Co-folding (Coming Soon)




Discovery Studio Simulation will offer OpenFold3 in early 2026 and expand AI capabilities for structure prediction of biomolecular complexes, starting with protein-ligand co-folding, allowing for prioritization of lead candidates early on.



With Discovery Studio Simulation, biologists can rapidly explore novel designs, evaluate candidates with confidence, and accelerate the path from idea to lab — without needing holiday magic!



🎁 Wish #2: Modern, Compliant, High-throughput Bioprocessing 



Bioprocessing is undergoing a major transformation. As bioprocess labs push toward intensified, automated, and data-rich development, they need to move beyond legacy solutions. Spreadsheets, disconnected ELNs, isolated instruments, and custom scripts can no longer support the speed and complexity of modern biologics development. To thrive in a Bioprocessing 4.0 era, bioprocess teams need modern, cloud-native, automated solutions that eliminate manual tasks, enable structured, AI-ready data capture, improve operational efficiency, and support seamless scale-up — all while maintaining GxP compliance.



What BIOVIA delivers:



• A modern, cloud-native environment built for speed, scale, and Bioprocessing 4.0: BIOVIA ONE Lab provides a flexible, secure digital ecosystem designed for today’s high-throughput, data-intensive process development. Teams can run more studies, collaborate seamlessly, and scale workflows as their demands change.



• High-throughput experiment design and execution: Automated planning tools, reusable workflow templates, and integrated sample/recipe management let teams run many more experiments in parallel with far fewer manual steps, accelerating both upstream and downstream development.



• Automation that removes manual work and reduces risk: With ONE Lab’s automated instrument integration and procedure execution capabilities, teams can remove manual steps, minimize errors, and have more time to focus on science.



• Structured, searchable, AI-ready data from day one: Scientists can capture data in a standardized, structured format that can be easily compared, analyzed, and fed into AI/ML models. No more piecing together spreadsheets, manual documents, or PDFs.



• Unified materials management: Teams can register and manage complex biologic entities with a common data model, unified across all labs, making sure that IP is protected and context is preserved.



• Built-in GxP compliance across the development lifecycle: ONE Lab provides audit trails, e-signatures, controlled workflows, and secure, validated data handling. Teams maintain full compliance from early DoE through development.



• Tech transfer made smooth and predictable: Because process definitions, parameters, and historical data are captured in an industry-standard format (S88), development teams can hand off processes to manufacturing far more easily. This reduces friction, prevents ambiguity, and accelerates scale-up into GMP operations.



ONE Lab is the modern, cloud-native solution that bioprocess teams have on their holiday wish list — where experiments scale faster, data flows seamlessly, automation reduces manual work, and decisions are made with real-time insight, with GxP compliance at each stage.



🎁 Wish #3: Efficient, Compliant, Data-driven Smart Manufacturing



Scaling biologics manufacturing is complex. Manual tasks, fragmented systems, and disconnected data make it difficult to track production, ensure quality, and respond quickly to problems. Maintaining GxP compliance across multiple sites adds another layer of difficulty.



What BIOVIA delivers:



• Single source of truth for all data: BIOVIA Discoverant brings together data from equipment, MES, LIMS, QC labs, and batch records into one central location. Teams can monitor operations in real-time, trend key parameters, and quickly spot anomalies. No more hunting through spreadsheets or disparate systems.



Users can also sync a particular hierarchy to the 3DEXPERIENCE platform, automatically pulling in all data from every batch and every parameter linked to that hierarchy in real-time, ensuring manufacturing data is centralized, always current, and accessible.



• Faster root-cause analysis and proactive interventions: With real-time analytics and statistical modeling, teams can discover process deviations in minutes instead of days and act before quality or yield is impacted.



• Optimized, predictable operations: Continuous monitoring of critical process parameters and quality attributes helps maximize yields, reduce waste, and ensure batch-to-batch consistency. Manufacturing becomes more efficient, reliable, and scalable.



• Built-in GxP compliance and audit readiness: Discoverant ensures data integrity with secure, traceable, validated workflows, supporting 21 CFR Part 11 compliance principles. Every trend, comparison, and decision is fully documented for regulatory inspections.



• Smoother hand-off from development to production: Because process definitions, parameters, and historical data are standardized and structured, manufacturing teams can adopt new processes with confidence, reducing friction and accelerating scale-up, and therefore time-to-market.



With Discoverant, manufacturing moves from reactive, manual operations to proactive, insight-driven, and fully compliant production, helping biopharma teams deliver high-quality biologics at scale, on time, and with confidence.



⭐ The Holiday Wrap-up: From Design to Manufacturing, BIOVIA Delivers



From discovery through bioprocessing and into manufacturing, BIOVIA ties every stage together with a perfect bow to help R&amp;D teams deliver high-quality biologics faster. Make this holiday season magical for your biologists – schedule a call with an expert.



 ]]>
      </content:encoded>
      </item>
<item>
      <title>
      <![CDATA[ From Data to Decisions ]]>
      </title>
      <link>https://blog.3ds.com/brands/biovia/from-data-to-decisions/</link>
      <guid>https://blog.3ds.com/guid/298759</guid>
      <pubDate>Wed, 10 Dec 2025 21:21:10 GMT</pubDate>
      <description>
      <![CDATA[ How Real-Time Analytics Enables Faster Pharmaceutical Release
 ]]>
      </description>
      <content:encoded>
      <![CDATA[ 

In today’s pharmaceutical environment, speed is not just a competitive advantage — it is a patient requirement. Every hour spent waiting for batch disposition delays lifesaving therapies reaching patients who depend on them.



– Larry Fiegland, Discoverant Product Manager




Yet, despite major investments in Industry 4.0, many manufacturers still wait days or weeks for data consolidation, manual review cycles, or investigation resolution before releasing their products.



The disconnect is rarely the technology itself. Plants generate enormous volumes of data. The challenge is turning that data into timely, contextual decisions.



Real-time analytics is changing that dynamic. With solutions such as Dassault Systèmes’ Discoverant platform, organizations no longer wait for end-of-batch analysis; instead, they use continuous monitoring, multivariate insight, and role-specific visualization to accelerate release by exception, reduce investigations, and increase confidence in product quality.



The Problem: Data Exists — Insight Doesn’t



Every pharmaceutical batch produces:




instrument data



in-process quality attributes



environmental measurements



offline testing results



operator and equipment event logs




But too often, these are siloed in disparate systems — LIMS, historian platforms, MES, spreadsheets, or even paper records. Quality teams spend significant time retrieving, reconciling, and validating the data before they ever analyze it.



That latency matters. Slow insight means:




Deviations are discovered too late



Investigation cycles lengthen



Batches wait for approval



Risk decisions are made without complete visibility




The result is both operational delay and regulatory frustration.



Real-Time Analytics Changes the Equation



Real-time is not about “fast charts.” It is about enabling decisions while the process is occurring.



For example, Discoverant continuously ingests and contextualizes process, quality, and historian data into a structured process record. With that foundation, teams can:



1. Detect process drift before it becomes a deviation



Multivariate control models capture relationships across process parameters, not just single thresholds. When the interaction between variables becomes unstable, alerts allow intervention while the batch is still recoverable.
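


To make this concrete, here is a minimal sketch of one common flavor of multivariate monitoring: a PCA-based Hotelling’s T² statistic fitted on historical good batches. It is a generic Python illustration under assumed parameter names, not Discoverant’s actual implementation.

```python
# Generic sketch of PCA-based Hotelling's T^2 monitoring -- an illustration
# of the general technique, not Discoverant's implementation. Parameter
# names are hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import f
from sklearn.decomposition import PCA

PARAMS = ["temperature", "pH", "feed_rate", "dissolved_oxygen"]

def fit_monitor(golden_batches: pd.DataFrame, n_components: int = 2):
    """Fit a PCA model on historical, known-good batch data."""
    X = golden_batches[PARAMS].to_numpy(dtype=float)
    mu, sd = X.mean(axis=0), X.std(axis=0)
    pca = PCA(n_components=n_components).fit((X - mu) / sd)
    return mu, sd, pca

def t2_statistic(sample, mu, sd, pca) -> float:
    """Hotelling's T^2 of one new observation in PCA score space."""
    scores = pca.transform(((np.asarray(sample) - mu) / sd).reshape(1, -1))[0]
    return float(np.sum(scores**2 / pca.explained_variance_))

def t2_limit(n_train: int, n_components: int, alpha: float = 0.01) -> float:
    """Upper control limit for T^2, derived from the F distribution."""
    k, n = n_components, n_train
    return k * (n - 1) * (n + 1) / (n * (n - k)) * f.ppf(1 - alpha, k, n - k)
```

An observation whose T² exceeds the limit signals that the joint behavior of the parameters has left the learned region, even when every individual value is still inside its own specification.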



2. Compress investigation cycle time



Root causes are not hidden across systems — data context is preserved and available instantly. Process engineers can rapidly review trends, batch comparisons, and historical signatures, avoiding days of manual data wrangling.



3. Empower exception-based release



Instead of reviewing every parameter for every batch, quality and release teams can focus only where risk exists — accelerating disposition without reducing scrutiny.



4. Drive learning across products and sites



Discoverant aggregates process intelligence over time, enabling pattern recognition, recurring deviation detection, and capability tracking across products and facilities.



Case Example: Release Without Waiting



Imagine a fermentation process where performance hinges on precise control of temperature, pH, nutrient feed rate, and a specific dissolved oxygen trajectory.



Traditional monitoring watches these individually. But acceptable values alone do not guarantee an acceptable product — the relationship between them matters.



A multivariate monitoring model built on historical successful runs learns the expected interaction among these variables. Now, on each batch:




The system identifies emerging deviation fingerprints hours before sampling shows impact



Engineers can adjust parameters proactively



Data and model evidence go directly into the release dossier




The effect? Release happens faster because confidence arrives earlier. Instead of waiting for offline assay results to identify a bad batch, the batch is guided to success — or flagged early if risk increases.
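


Sketched in code, and reusing the hypothetical fit_monitor, t2_statistic, and t2_limit helpers from the section above, the early-warning loop might look like this; synthetic data stands in for real historian records.

```python
# Hypothetical demonstration on a simulated fermentation batch: flag the
# batch as soon as the joint behavior of the four parameters leaves the
# region learned from historical good runs.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
golden = pd.DataFrame(rng.normal(size=(200, 4)), columns=PARAMS)  # good runs
live = golden.iloc[:50].copy()
live.iloc[40:, 1] += 6.0            # inject a pH drift into the live batch

mu, sd, pca = fit_monitor(golden)
limit = t2_limit(n_train=len(golden), n_components=2)

for t, row in enumerate(live[PARAMS].to_numpy()):
    if t2_statistic(row, mu, sd, pca) > limit:
        print(f"t={t}: multivariate excursion -- intervene while recoverable")
        break
```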



Why the Shift Matters Now



Three forces are converging:



1. Regulatory Encouragement for Data-Driven Control



Agencies increasingly endorse continued process verification, process knowledge, and real-time review (ICH Q10/Q12, FDA emerging technology focus). Solutions like Discoverant help demonstrate process understanding, traceability, and knowledge management maturity.



2. Margin Pressure Requires Leaner Decision Cycles



With pipeline uncertainty, loss of exclusivity, and global complexity, wasted time directly affects operating margin.



3. Workforce Knowledge Gap



Newer staff have less tacit process knowledge. Good analytics compensate by guiding interpretation and elevating risk signals.



Real-time analytics is therefore both a productivity tool and a risk-management tool.



The Path to Real-Time Release Capability



Organizations often think this transformation requires new sensors or new systems. In reality, the critical steps are:



1. Connecting existing data sources



Integrate control systems, lab data, quality records, and event logs into a common analytical backbone — with lineage and integrity preserved.



2. Establishing contextualized models of process behavior



This includes process fingerprinting, multivariate control models, adaptive limits, and golden batch monitoring.
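


The “golden batch” idea can be pictured with a short, generic sketch (assumed array shapes, no specific product API): trajectories of a parameter from known-good batches define a time-varying envelope, and a live batch is checked against it.

```python
# Generic sketch of golden-batch envelope monitoring, assuming batch
# trajectories have been resampled onto a common time axis.
import numpy as np

def golden_envelope(golden: np.ndarray, n_sigma: float = 3.0):
    """golden: (n_batches, n_timepoints) traces of one parameter's good runs."""
    mean, sd = golden.mean(axis=0), golden.std(axis=0)
    return mean - n_sigma * sd, mean + n_sigma * sd  # adaptive limits over time

def first_excursion(live: np.ndarray, lo: np.ndarray, hi: np.ndarray):
    """Index of the first timepoint where the live trace leaves the envelope."""
    out = np.where((live < lo) | (live > hi))[0]
    return int(out[0]) if out.size else None
```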



3. Delivering role-specific visibility




Operators get actionable status and alarms



Engineers receive diagnostics and capability analytics



QA receives automated batch assessments and exception flags




4. Culture change from reporting to learning



Discoverant’s analytics shift the organization away from passive documentation toward active insight and intervention.



The Impact: Faster, Safer, More Confident Release Decisions



Companies that achieve real-time analytical maturity report:




Reduced investigation cycle time



Fewer unplanned excursions



Shorter release timelines



Higher transparency during regulatory inspection



Stronger cross-site knowledge sharing




Most importantly, they deliver the product more reliably to patients.



Conclusion: Speed with Confidence is the New Mandate



The pharmaceutical industry does not suffer from lack of data — it suffers from lack of timely understanding. Real-time analytics bridges this gap by converting raw information into actionable decisions during the process, not weeks after it.



Faster release is not about cutting corners — it is about knowing earlier, responding earlier, and proving control earlier.



Companies adopting capabilities like Discoverant aren’t just accelerating batch disposition; they are transforming how process knowledge is used — from hindsight to foresight — and that may be the most powerful shift happening in pharmaceutical operations today.



Learn how BIOVIA Discoverant can help you unlock the full value of your laboratory and manufacturing data.



 ]]>
      </content:encoded>
      </item>
<item>
      <title>
      <![CDATA[ Turn On a Light with Quantum Chemistry ]]>
      </title>
      <link>https://blog.3ds.com/brands/biovia/turn-on-a-light-with-quantum-chemistry/</link>
      <guid>https://blog.3ds.com/guid/295407</guid>
      <pubDate>Thu, 04 Dec 2025 21:42:17 GMT</pubDate>
      <description>
      <![CDATA[ Modeling OLEDs with TURBOMOLE
 ]]>
      </description>
      <content:encoded>
      <![CDATA[ 
“Light, seeking light, doth light of light beguile”, says a mocker in Shakespeare’s Love’s Labour’s Lost. This poetic musing captures a paradox inherent to light itself. Light is an everyday phenomenon, but its generation and absorption at the molecular level are still the subject of scientific research.



OLEDs: What Makes Them Shine?



Organic Light-Emitting Diodes (OLEDs) have transformed modern displays and lighting technologies. They power smartphones, televisions, laptops, VR/AR headsets, cars, medical devices, and ambient lighting. Their strengths are well known: vivid colors, thin form factors, flexibility, high contrast, fast response, and wide viewing angles.







Figure 1: Scheme of an OLED stack. Each layer plays a critical role in efficiency and color.



An OLED is a stack composed of several layers of organic materials, each with a specific function. A sketch is shown in Figure 1. At the core is the emission layer, where light is generated. This layer is typically located between an electron transport layer (ETL) and a hole transport layer (HTL), all positioned between a negatively charged cathode and a positively charged anode. When a voltage is applied, electrons are injected from the cathode, while holes (positive charge carriers) are created at the anode. These charge carriers drift through their respective transport layers towards the emission layer, where they meet and recombine to form excited states called excitons. When these excitons relax to their ground state, they release energy in the form of light. The color of the emitted light is determined by the electronic structure of the emitter molecules. Therefore, precise chemical design is critical to OLED performance.



Despite their advantages, OLEDs still face a number of unresolved issues that researchers are actively working to solve. Major issues are lifetime and stability, as the materials are exposed to light and moisture, as well as constant electrical voltage and charge flow. This easily leads to decomposition reactions. Improving emitter and transport materials as well as optimizing device architectures are key research directions for novel stacks.



For the development of stable and efficient emitters, understanding how light is emitted is important. Different processes, like fluorescence and phosphorescence, are possible. While phosphorescent OLEDs achieve nearly 100% internal quantum efficiency, fluorescent materials still lack this efficiency, resulting in higher power consumption in displays. The superior efficiency of phosphorescent OLEDs comes at a price: their reliance on rare heavy metals increases cost and sustainability concerns. Researchers are also exploring novel materials and concepts, such as thermally activated delayed fluorescence (TADF) and hybrid organic-inorganic emitters, to further expand the possibilities of producing light.



Other materials in an OLED stack, like electron/hole transport or blocking layers, can also be further optimized with an understanding of transport processes and by adjusting conduction levels.
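


The link between emitter energetics and color is simple arithmetic: the emission wavelength is λ = hc/E. A minimal sketch, with illustrative numbers:

```python
# Convert an exciton (emission) energy in eV to a wavelength in nm via
# lambda = h*c / E. The 2.4 eV value below is illustrative.
H_EV_S = 4.135667696e-15      # Planck constant in eV*s
C_NM_S = 2.99792458e17        # speed of light in nm/s

def wavelength_nm(energy_ev: float) -> float:
    return H_EV_S * C_NM_S / energy_ev

print(f"{wavelength_nm(2.4):.0f} nm")  # ~517 nm: green, the regime of
                                       # emitters like Ir(ppy)3 (Figure 3)
```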



Quantum Chemistry: Can We Understand Molecular Processes?



In the search for better materials, combining experiments with computational methods has several advantages. Predicting the optical and electronic properties of candidate molecules helps to select the most promising ones for further investigation. For instance, a large virtual chemical space can be screened for specific properties prior to synthesizing and measuring molecules. This can speed up the development process and reduce costs. Computational methods uncover details of molecular and electronic structures that experiments alone can rarely capture. Quantum chemistry, a branch of computational chemistry which applies quantum mechanical principles to chemical systems, is a particularly important method in OLED research, because other approaches, such as mechanistic or cheminformatics methods, lack information about the electronic structure of molecules.







Figure 2: Scheme of fluorescence and phosphorescence processes.



Quantum chemistry is often viewed by chemistry students as difficult, useless, or both. In OLED research in particular, however, it is worth the effort, as it is the only way to understand fundamental mechanisms ab initio, i.e., largely without empirical assumptions.



Charge transport and light emission are at the heart of OLED functionality. Understanding both requires a correct mathematical description of electrons in molecules. The Schrödinger equation provides this description. Solving this equation with quantum chemical approaches yields internal energy levels, from which various properties can be derived. These energy levels determine, for example, the position of the molecular orbitals that enable charge transport and the energy differences between the electronic states involved in excitation processes. For charge transport, the frontier orbitals of a molecule are of interest: the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO); see Figure 3 for an example. Holes, as positive charge carriers, are created when an electron is removed from the HOMO, and the injection of electrons into the LUMO creates the negative charge carriers. Light is emitted when an electron relaxes from an excited state to its ground state and releases the emission energy in the form of a photon; see Figure 2. It is true that the details of quantum chemistry are difficult, but its application is also useful in real-life problems [1].
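


As a toy-sized illustration of these frontier-orbital quantities, the following snippet runs a small DFT calculation and reads off the HOMO-LUMO gap. It uses the open-source PySCF package rather than TURBOMOLE, purely so that the example is freely runnable; the molecule and functional are arbitrary stand-ins.

```python
# Toy DFT calculation with the open-source PySCF package (a stand-in for
# TURBOMOLE here): solve the Kohn-Sham equations, then read off the
# HOMO and LUMO energies.
from pyscf import gto, dft

mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",
            basis="def2-svp")       # water as a placeholder molecule
mf = dft.RKS(mol)                   # restricted Kohn-Sham DFT
mf.xc = "b3lyp"
mf.kernel()                         # ground-state SCF

homo = mf.mo_energy[mf.mo_occ > 0][-1]   # highest occupied orbital energy
lumo = mf.mo_energy[mf.mo_occ == 0][0]   # lowest unoccupied orbital energy
print(f"HOMO-LUMO gap: {(lumo - homo) * 27.2114:.2f} eV")  # hartree -> eV
```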







Figure 3: The frontier orbitals of Ir(ppy)3, a highly successful green phosphorescent emitter.



TURBOMOLE: How Can Computations Be Done In a Smart Way?



Quantum chemistry is daunting not only for beginners, due to its mathematical formalism, but also for users, due to a zoo of methods and a plethora of shortcuts. Democratized solutions can help many users find their way through this jungle. There are various free and commercial programs available. These programs differ in the selection of available methods, the efficiency with which they are implemented, how user-friendly they are, and how stable their code is. In addition, they are often characterized by specialization or main areas of application. All of these factors should be considered when choosing a particular solution. BIOVIA offers the highly optimized software suite TURBOMOLE for quantum chemical computation [1].






Since its beginnings as a Unix command-line tool set, TURBOMOLE has become more and more democratized over the years. The graphical user interface TmoleX is freely available for easy input creation, job processing, and result display [2]. With the implementation in Pipeline Pilot [3], TURBOMOLE can be integrated into scientific workflows. The 3DEXPERIENCE platform will make the value of quantum chemistry with TURBOMOLE obvious to non-experts. Figure 4 shows TURBOMOLE’s natural habitat.







Figure 4: TURBOMOLE is the quantum chemical engine underlying various user interfaces (UIs).



Alongside this democratization, features have been added to the package over the years. According to the Wikipedia list of quantum chemistry software, TURBOMOLE is currently the package with the most implemented features; see Figure 5. Many of these features are important in OLED research.







Figure 5: Source: adapted from the Wikipedia list of quantum chemistry software, January 2025.



The workhorse for studying electronically excited states is time-dependent density functional theory (TD-DFT). With the semiempirical TDDFT-ris method, a much faster approach for calculating TDDFT UV-Vis absorption spectra is also available. For more accurate predictions, TURBOMOLE provides implementations of the GW approximation and Bethe–Salpeter equation (GW/BSE), the approximate coupled-cluster singles and doubles model CC2, and the algebraic diagrammatic construction through second order [ADC(2)]. GW/BSE is a post-processing of DFT calculations, while CC2 and ADC(2), in contrast, are non-DFT-based methods. Both significantly improve the accuracy for charge-transfer and triplet states. An overview of these methods and their typical applications is summarized in Table 1.
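


To make the workflow concrete, here is a minimal TD-DFT run for vertical singlet excitation energies, again sketched with the open-source PySCF package as a freely runnable stand-in; in TURBOMOLE the corresponding calculation would be set up via TmoleX or the command line, and could be refined with the more accurate methods listed above.

```python
# Minimal TD-DFT sketch with the open-source PySCF package (illustrative
# stand-in, not TURBOMOLE): compute the lowest singlet excitations of
# ethylene, a tiny placeholder chromophore.
from pyscf import gto, dft, tddft

mol = gto.M(atom=("C 0 0 0.668; C 0 0 -0.668; "
                  "H 0 0.923 1.238; H 0 -0.923 1.238; "
                  "H 0 0.923 -1.238; H 0 -0.923 -1.238"),
            basis="def2-svp")
mf = dft.RKS(mol)
mf.xc = "pbe0"
mf.kernel()                            # ground-state DFT first

td = tddft.TDDFT(mf)                   # full TD-DFT response calculation
td.nstates = 5                         # five lowest singlet excited states
td.kernel()
for i, e in enumerate(td.e, start=1):  # excitation energies in hartree
    print(f"S{i}: {e * 27.2114:.2f} eV")
```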







To capture the full complexity of photochemical processes, a combination of quantum-mechanical methods is required. A comprehensive description typically includes:

• Sound quantum mechanics for both ground and electronically excited states
• Accurate energies of the ground state as well as the singlet and triplet excited states
• Relaxation of excited-state geometries, since molecular structures can change dramatically upon excitation
• Vibrational frequencies of both ground and excited states, essential for spectra and dynamics
• Solvation effects, because the environment strongly influences excitation and relaxation pathways
• Spin-orbit coupling effects, which enhance efficiency by channeling triplet excitons into light-emitting states



For specialized studies, TURBOMOLE also provides advanced capabilities like quadratic response theory, vibrationally resolved spectroscopy, radiationless decay pathways, or nonadiabatic dynamics simulations. This means that the appropriate method for a certain photochemical problem can be selected depending on the task—from fast screening of molecular properties to highly accurate computations and advanced simulations of light-driven molecular processes.



Conclusion



In conclusion, OLED research sits at the intersection of molecular design, materials science, and quantum chemistry. Tools like BIOVIA TURBOMOLE enable scientists to navigate this frontier swiftly and precisely, bringing brighter, more efficient displays from theory into our daily lives.



As computational power advances and computational methods become more accessible, quantum chemistry will play an increasingly critical and democratized role in accelerating materials research. With this foundation in place, the next act is poised to shine even more brilliantly.



Many thanks to Uwe Huniar for the enlightening conversations and for providing the images that helped bring the technical concepts in this article to life.



References



[1] Sree Ganesh Balasubramani, Guo P. Chen, Sonia Coriani, Michael Diedenhofen, Marius S. Frank, Yannick J. Franzke, Filipp Furche, Robin Grotjahn, Michael E. Harding, Christof Hättig, Arnim Hellweg, Benjamin Helmich-Paris, Christof Holzer, Uwe Huniar, Martin Kaupp, Alireza Marefat Khah, Sarah Karbalaei Khani, Thomas Müller, Fabian Mack, Brian D. Nguyen, Shane M. Parker, Eva Perlt, Dmitrij Rappoport, Kevin Reiter, Saswata Roy, Matthias Rückert, Gunnar Schmitz, Marek Sierka, Enrico Tapavicza, David P. Tew, Christoph van Wüllen, Vamsee K. Voora, Florian Weigend, Artur Wodyński, Jason M. Yu (2020). TURBOMOLE: Modular program suite for ab initio quantum-chemical and condensed-matter simulations. Journal of Chemical Physics, 152(18), 184107.



[2] Claudia Steffen, Klaus Thomas, Uwe Huniar, Arnim Hellweg, Oliver Rubner, Alexander Schroer (2010). TmoleX: A graphical user interface for TURBOMOLE. Journal of Computational Chemistry, 31(16), 2967-2970.



[3] Pipeline Pilot.







📩 Want to find out the latest news about BIOVIA events, customer stories, blogs and more? Join the newsletter today!
 ]]>
      </content:encoded>
      </item>
<item>
      <title>
      <![CDATA[ BIOVIA Live 2025 Recap ]]>
      </title>
      <link>https://blog.3ds.com/brands/biovia/biovia-live-2025-recap/</link>
      <guid>https://blog.3ds.com/guid/295105</guid>
      <pubDate>Thu, 20 Nov 2025 20:02:27 GMT</pubDate>
      <description>
      <![CDATA[ Shaping Science Together
 ]]>
      </description>
      <content:encoded>
      <![CDATA[ 
BIOVIA Live 2025 brought the global science community together in Dublin, Ireland, for three days focused on making real progress in research, technology, and collaboration. Leading scientists, industry leaders, engineers, and partners were on hand to share how they’re actually using these tools, explore new tech, and figure out where innovation is headed next. The event highlighted the growing impact of AI, modeling, and digital science across industries.



Event by the Numbers




400+ attendees



70+ sessions and workshops



25 customer speakers



4 dynamic tracks



3 days of hands-on, AI-powered breakthroughs in digital science




Making Science Better with AI and Teamwork



The main keynotes gave us an exciting look at the future of science-based industries. BIOVIA leaders and speakers from top companies discussed the opportunities and challenges of integrating digital transformation with scientific research. A lot of attention was given to the 3DEXPERIENCE platform and how it helps organizations tie together virtual modeling with actual lab experiments.



We saw some great real-world examples of how BIOVIA’s solutions are helping to:




Speed up lab work



Shorten timelines for drug discovery



Improve quality management



Make R&amp;D more sustainable




One attendee summed it up perfectly:




This was my first time at a BIOVIA event and I was thoroughly impressed how mature AI was and to the degree it was fully embedded in the breadth of their solutions.




Great Sessions: AI Innovation in Action



The agenda was packed with diverse topics, all backed up by practical demos and measurable results.




“Virtual Twin” in Life Sciences: Top pharma teams explained how dynamic virtual models and AI are changing drug development. They are making candidate selection more accurate, reducing clinical trial risks, and expediting the delivery of treatments to patients. People got to hear firsthand how companies are using BIOVIA modeling and simulation to optimize molecular design, while also keeping costs down.



Lab of the Future: Organizations that have implemented BIOVIA ONE Lab showed how they are moving away from old, paper-based processes to fully digitalized labs, creating AI-ready data. Presenters shared how unified systems reduce errors, guarantee data integrity, and give scientists more time for critical thinking and analysis.



Materials Innovation: Researchers shared how BIOVIA Materials Studio is helping them quickly design and simulate sustainable polymers and new battery technologies. Examples from universities and corporate R&amp;D teams showed how molecular modeling helps predict real-world material traits, leading to safer products and “greener” processes.




Keynote session.



Deep Dives: AI Workshops and Hands-On Training



The workshops and hands-on training offered in-depth explorations of AI modeling, molecular design, and automating workflows. BIOVIA experts led guided tutorials and clinics where attendees could test out new features in their own research environments.



Some of the practical modules included:




AI-Enhanced Data Science with Pipeline Pilot: Attendees learned advanced ways to process complex research data, including using real examples to build custom predictive models.



Digitalized Lab Execution: Participants set up and fine-tuned execution protocols inside BIOVIA ONE Lab, seeing the direct impact on sample tracking and compliance.



Generative Design in Therapeutics: Researchers checked out tools for AI-based molecular generation, allowing them to quickly evaluate new candidates for drug development.




Feedback showed everyone really valued the in-person teaching, getting to ask subject-matter experts questions directly, and the chance to try out solutions collaboratively.



Hands-On Training



🤝 Connections That Inspire: Networking Highlights



Networking was built into the whole event, from organized breakout discussions to casual meet-ups. Topic-specific lounges got conversations going on things like compliance, sustainability, and digital transformation—helping people troubleshoot issues and share best practices. The evening welcome reception was a relaxed setting for deeper chats, and the BIOVIA band’s live show was a major highlight that kept the conversations going late into the night.







As always, lasting connections were made not just in sessions but in the hallways and over lunch. The mix of different perspectives created a lot of energy, great debate, and momentum for future cross-discipline collaborations.







Join the Conversation



Couldn’t make it to Dublin? Find out where the next BIOVIA Live is, stay connected, and join the BIOVIA newsletter today!



A big thank you to every speaker, customer, and partner who helped make BIOVIA Live 2025 a success. To keep up with the latest in scientific innovation and get exclusive content, we invite you to join a BIOVIA Community.



We look forward to moving ahead together as we keep driving innovation in science, technology, and sustainable discovery.

 ]]>
      </content:encoded>
      </item>
    </channel>
   </rss>