LRI AIMT3: Predictive toxicology using ‘omics’, high-throughput data, and cheminformatics – Deadline: 31 August 2011

Background

Current toxicity testing methods have been predictive of human risk, but they are resource- and animal-intensive. Because the capacity to test using traditional methods is limited, a full battery of tests is generally available only for compounds that are likely to have significant biological activity (e.g., drugs and pesticides), significant production volume, or potential for widespread exposure. Assessment of other compounds has relied on a variety of exposure-based considerations (e.g., presence only as a reaction intermediate in a closed system), more limited toxicity assessments (e.g., Base Set and Level 1), and, in some instances, structural similarity to well-tested compounds.

There is increasing pressure, through REACH and other initiatives, to generate more complete information sets on every chemical in commerce. Traditional toxicology testing methods are insufficient to meet this information need, and there is an opportunity to fill the void with state-of-the-art biotechnology and informatics methodologies. Toxicogenomics and metabolomics have already been shown to be sensitive methods that can predict toxic mode of action (Hamadeh et al., 2002; van Ravenzwaay et al., 2010). Most of this research has used in vivo models, but a growing literature indicates that gene expression (Kienhuis et al., 2011; Naciff et al., 2011) and metabolome (West et al., 2010; Balcke et al., 2011) changes in certain in vitro models are also predictive of toxicity. Automated high-throughput in vitro assay systems, such as EPA's ToxCast, have also been proposed as a way of generating data on chemical effects on a large number of key cellular and biochemical processes. These methods are most powerful when used in combination with data on chemicals with well-characterized toxicity. For example, data on the gene expression changes induced by phthalates in the developing rat reproductive system can clearly distinguish compounds that affect testicular development from those that do not (Liu et al., 2005).
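To illustrate the comparison against well-characterized reference chemicals, the following minimal sketch (Python with numpy; the compound names, signatures, and modes of action are hypothetical placeholders, not data from the studies cited above) ranks reference compounds by the similarity of their gene expression signatures to that of a query compound:

    import numpy as np

    # Hypothetical log2 fold-change signatures over the same gene set.
    # Each reference compound has a well-characterized mode of action.
    reference_signatures = {
        "reference_A (testicular toxicant)": np.array([2.1, -0.3, 1.8, 0.1, -1.2]),
        "reference_B (oxidative stress)":    np.array([-0.2, 1.9, 0.3, 2.2, 0.8]),
        "reference_C (inactive)":            np.array([0.1, 0.0, -0.2, -0.1, 0.1]),
    }

    def rank_by_similarity(query, references):
        """Rank reference compounds by Pearson correlation of signatures."""
        scores = {name: np.corrcoef(query, sig)[0, 1]
                  for name, sig in references.items()}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Expression signature of a compound with uncharacterized toxicity.
    query_signature = np.array([1.9, -0.1, 1.5, 0.3, -0.9])

    for name, r in rank_by_similarity(query_signature, reference_signatures):
        print(f"{name}: r = {r:.2f}")

A real application would use genome-wide signatures and more robust similarity measures, but the principle is the same: an uncharacterized chemical whose signature closely tracks that of a well-tested reference supports a mode-of-action hypothesis.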

Cheminformatic methods are coming into their own as robust databases such as DSSTox, ACToR and others have become widely available. These databases catalogue the toxicity study data that are publicly available, whether in the peer-reviewed literature or in publicly accessible regulatory submissions. Because the databases are searchable by chemical substructure, analogues that have already been tested can be identified, and various rules can then be applied to the search output to determine which analogues are suitable for comparison (Wu et al., 2010). Searches for relevant analogues can be thought of as hypothesis-generating exercises: analogues with robust toxicity data sets serve as predictors of the toxicity potential of a structurally related chemical. Such hypotheses, of course, need to be tested. In some cases the hypothesis will centre on the chemical's ability to be metabolized to the analogue, but in many cases it will be predicated on the analogues sharing a similar mode of action. Omics and high-throughput assay methods provide the most efficient way to compare the biological activities of analogues: they are predictive of mode of action and can also test whether a chemical exhibits additional modes of action not predicted by the analogue data set.
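As an illustration of substructure-based analogue identification (the call does not prescribe tooling; the open-source RDKit toolkit, the chemical inventory, and the SMARTS query below are assumptions made for this sketch):

    from rdkit import Chem

    # Hypothetical inventory of previously tested chemicals (name, SMILES).
    # A real search would query a database such as DSSTox or ACToR.
    tested_chemicals = [
        ("dibutyl phthalate", "CCCCOC(=O)c1ccccc1C(=O)OCCCC"),
        ("diethyl phthalate", "CCOC(=O)c1ccccc1C(=O)OCC"),
        ("benzoic acid",      "OC(=O)c1ccccc1"),
    ]

    # Substructure query: the phthalate diester core, written as SMARTS.
    query = Chem.MolFromSmarts("O=C(O[#6])c1ccccc1C(=O)O[#6]")

    def find_analogues(chemicals, pattern):
        """Return the names of tested chemicals containing the substructure."""
        hits = []
        for name, smiles in chemicals:
            mol = Chem.MolFromSmiles(smiles)
            if mol is not None and mol.HasSubstructMatch(pattern):
                hits.append(name)
        return hits

    print(find_analogues(tested_chemicals, query))
    # -> ['dibutyl phthalate', 'diethyl phthalate']

Rules of the kind described by Wu et al. (2010) would then be applied to such hits to decide which analogues are suitable candidates for comparison.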

Objectives

The overall objective of the proposed research is to provide a framework by which high-information-content (omics) and/or high-throughput data streams can be used to predict the toxicity of new chemicals and to fill gaps in toxicity data sets that can currently be filled only by traditional animal testing. The framework should provide a scientifically valid process by which novel data streams can replace traditional testing, especially for repeated-dose toxicity. The research should also focus on reducing or replacing the animals needed to evaluate repeated-dose toxicity. For these reasons, in vitro testing to generate reliable data sets could, where appropriate, form part of this research.

Scope

Deliverables:

  1. Development of a framework for incorporating high-information-content or high-throughput data streams into a scientifically-based predictor of repeated-dose toxicity.
  2. Evaluation of the framework through the generation of a critical mass of high-information-content data to support the testing of hypotheses generated by chemical analogue identification. This includes the incorporation of existing data (in public and private databases) as well as newly generated data in a format that is easily transferable (platform independent).
  3. Provision of a process for using high-information-content data to refine expert rules and to incorporate such data, including existing data in public and private databases, into chemical databases to support better hypothesis generation.



Timing: 3 years

LRI funding: 450,000 euros
