Tuesday, July 31, 2007

Atomic absorption spectroscopy

Atomic absorption spectroscopy
is a technique for determining the concentration of a particular metal element in a sample. Atomic absorption spectroscopy can be used to analyse the concentration of over 62 different metals in a solution.

The technique typically makes use of a flame to atomize the sample, but other atomizers such as a graphite furnace are also used. Three steps are involved in turning a liquid sample into an atomic gas:

- Desolvation – the liquid solvent is evaporated, and the dry sample remains
- Vaporisation – the solid sample vaporises to a gas
- Volatilisation – the compounds making up the sample are broken into free atoms.

The flame is arranged such that it is laterally long (usually 10 cm) and not deep. The height of the flame must also be controlled by adjusting the flow of the fuel mixture. A beam of light passes through this flame at its longest axis (the lateral axis) and hits a detector.

The light that is focused into the flame is produced by a hollow cathode lamp. Inside the lamp is a cylindrical metal cathode containing the metal for excitation, and an anode. When a high voltage is applied across the anode and cathode, the metal atoms in the cathode are excited into producing light with a certain emission spectrum. The type of hollow cathode tube depends on the metal being analysed.
For analysing the concentration of copper in an ore, a copper cathode tube would be used, and likewise for any other metal being analysed. The electrons of the atoms in the flame can be promoted to higher orbitals for an instant by absorbing a set quantity of energy (a quantum).
This amount of energy is specific to a particular electron transition in a particular element. As the quantity of energy put into the flame is known, and the quantity remaining at the other side (at the detector) can be measured, it is possible to calculate how many of these transitions took place, and thus get a signal that is proportional to the concentration of the element being measured.
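
The relationship between the light absorbed in the flame and the concentration of the element is the Beer-Lambert law, A = εlc. A minimal Python sketch, with purely hypothetical intensity and absorptivity values (the function names are invented for illustration):

```python
import math

def absorbance(incident, transmitted):
    """Beer-Lambert absorbance: A = log10(I0 / I)."""
    return math.log10(incident / transmitted)

def concentration(a, absorptivity, path_length_cm):
    """Solve A = epsilon * l * c for the concentration c."""
    return a / (absorptivity * path_length_cm)

# Hypothetical readings: half the light is absorbed crossing a 10 cm flame.
a = absorbance(100.0, 50.0)        # log10(2) ~ 0.301
c = concentration(a, 0.03, 10.0)   # assumed absorptivity 0.03 L/(mg*cm)
print(round(a, 3), round(c, 3))
```

This is why the lateral length of the flame matters: a longer path length gives a larger absorbance for the same concentration, improving sensitivity.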

Enzyme Immunoassay (Add-on)

Agglutination

is the clumping of particles. The word agglutination comes from the Latin agglutinare, "to glue to."

This occurs in biology in three main examples:

1. The clumping of cells such as bacteria or red blood cells in the presence of an antibody, which binds with multiple particles and joins them.

2. The coalescing of small particles that are suspended in solution; these larger masses are then (usually) precipitated.

3. An allergic reaction type occurrence where cells become more compacted together to prevent foreign materials entering them. This is usually the result of an antigen in the vicinity of the cells.

http://en.wikipedia.org/wiki/Agglutination_(biology)

How Genetically Modified Food Came About


Labeled probe


Toxicology analysis

Dose-response relationship
describes the change in effect on an organism caused by differing levels of exposure (or doses) to a stressor (usually a chemical). This may apply to individuals (e.g. a small amount has no observable effect, a large amount is fatal), or to populations (e.g. how many people are affected at different levels of exposure).

Studying dose response, and developing dose response models, is central to determining "safe" and "hazardous" levels and dosages for drugs, potential pollutants, and other substances that humans are exposed to. These conclusions are often the basis for public policy.
When the agent is radiation instead of a drug, this is called the exposure-response relationship.

Dose-response curve

A dose-response curve is a simple X-Y graph relating the magnitude of a stressor (e.g. concentration of a pollutant, amount of a drug, temperature, intensity of radiation) to the response of the receptor (e.g. organism under study). The response is usually death (mortality), but other effects (or endpoints) can be studied.

The measured dose (usually in milligrams, micrograms, or grams per kilogram of body-weight) is generally plotted on the X axis and the response is plotted on the Y axis. Commonly, it is the logarithm of the dose that is plotted on the X axis, and in such cases the curve is typically sigmoidal, with the steepest portion in the middle.

The first point along the graph where a response above zero is reached is usually referred to as a threshold-dose. For most beneficial or recreational drugs, the desired effects are found at doses slightly greater than the threshold dose. At higher doses still, undesired side effects appear and grow stronger as the dose increases.
The stronger a particular substance is, the steeper this curve will be. In quantitative situations, the Y-axis is usually designated by percentages, which refer to the percentage of users registering a standard response (which is often death, when the 50% mark refers to the LD50). Such a curve is referred to as a quantal dose-response curve, distinguishing it from a graded dose-response curve, where response is continuous.
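
A sigmoidal quantal dose-response curve like this is often modelled with the Hill equation. A minimal sketch, with a hypothetical LD50 and slope (the function name is invented for illustration):

```python
def quantal_response(dose, ld50, hill_slope):
    """Fraction of subjects showing the standard response at a given dose,
    modelled with the Hill equation (an illustrative sketch, not a fit to
    real data). A larger hill_slope gives a steeper sigmoid."""
    return dose ** hill_slope / (ld50 ** hill_slope + dose ** hill_slope)

# Hypothetical substance with an LD50 of 10 mg/kg and slope 2:
print(quantal_response(10.0, 10.0, 2))  # 0.5 -- by definition, 50% respond at the LD50
print(quantal_response(5.0, 10.0, 2))   # 0.2 -- down in the threshold region
```

Plotting this function against the logarithm of the dose gives the familiar S-shape with its steepest portion in the middle.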

Problems exist regarding non-linear relationships between dose and response, thresholds reached and 'all-or-nothing' responses. These inconsistencies can challenge the validity of judging causality solely by the strength or presence of a dose-response relationship.
lethal dose (LD)
is an indication of the lethality of a given substance or type of radiation. Because resistance varies from one individual to another, the 'lethal dose' represents a dose (usually recorded as dose per kilogram of subject body weight) at which a given percentage of subjects will die.
The most commonly used lethality indicator is the LD50, a dose at which 50% of subjects will die. LD measurements are often used to describe the potency of venoms in animals such as snakes.

Animal-based LD measurements are a commonly-used technique in drug research, although many researchers are now shifting away from such methods.

LD figures depend not only on the species of animal, but also on the mode of administration. For instance, a toxic substance inhaled or injected into the bloodstream may require a much smaller dosage than if the same substance is swallowed.

LD values for humans are generally estimated by extrapolating results from testing on animals or on human cell cultures. One common form of extrapolation involves measuring LD on animals like mice or dogs, converting to dosage per kilogram of biomass, and extrapolating to human norms.
While animal-extrapolated LD values are correlated to lethality in humans, the degree of error is sometimes very large. The biology of test animals, while similar to that of humans in many respects, sometimes differs in important aspects.
For instance, mouse tissue is approximately fifty times less responsive than human tissue to the venom of the Sydney funnel-web spider. The square-cube law can also complicate the scaling relationships involved.
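
The per-kilogram extrapolation described above amounts to simple proportional scaling. All numbers below are hypothetical, and, as the text notes, real interspecies scaling is much less reliable than this sketch suggests:

```python
def scale_lethal_dose(animal_ld50_mg, animal_mass_kg, target_mass_kg):
    """Naive per-kilogram extrapolation of a lethal dose between species.
    Real interspecies scaling is far less reliable (square-cube law,
    tissue-sensitivity differences), so treat this purely as a sketch."""
    dose_per_kg = animal_ld50_mg / animal_mass_kg  # mg per kg in the test animal
    return dose_per_kg * target_mass_kg            # same mg/kg applied to the target

# Hypothetical numbers: 5 mg kills half of a group of 25 g mice -> 200 mg/kg,
# which extrapolates to roughly 14000 mg for a 70 kg human.
print(scale_lethal_dose(5.0, 0.025, 70.0))
```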

Currently, the only known LD50 values obtained directly on humans are from Nazi human experimentation.

Acceptable Daily Intake
or ADI, is a measure of the amount of a specific substance (usually a food additive, or a residue of a veterinary drug or pesticide) in food or drinking water that can be ingested orally over a lifetime without an appreciable health risk. ADIs are expressed relative to body mass, usually in milligrams of the substance per kilogram of body mass per day.

This concept was first introduced in 1957 by the Council of Europe and later by the Joint Expert Committee on Food Additives (JECFA), a committee maintained by two United Nations bodies: the Food and Agriculture Organization (FAO) and the World Health Organization (WHO).

An ADI value is based on current research, with long-term studies on animals and observations of humans. First, a No Observable (Adverse) Effect Level, the amount of a substance that shows no toxic effects, is determined on the basis of studies intended to measure an effect at several doses.
Usually the studies are performed with several doses, including high doses. Where there are several studies on different effects, the lowest NO(A)EL is usually taken.
Then, the NOEL (or NOAEL) is scaled down by a safety factor, conventionally 100, to account for the differences between test animals and humans (a factor of 10) and possible differences in sensitivity between humans (another factor of 10). The ADI is usually given in mg per kg body weight per day. Note that the ADI is considered a safe intake level for a healthy adult of normal weight who, on average, consumes the daily amount of the substance in question.
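
The NOAEL-to-ADI step above is just a division by the combined safety factor. A minimal sketch with a hypothetical NOAEL:

```python
def acceptable_daily_intake(noael_mg_per_kg_day,
                            interspecies_factor=10,
                            intraspecies_factor=10):
    """ADI = NO(A)EL divided by the conventional safety factor of
    10 (animal-to-human) x 10 (human-to-human) = 100."""
    return noael_mg_per_kg_day / (interspecies_factor * intraspecies_factor)

# Hypothetical NOAEL of 50 mg per kg body weight per day from an animal study:
print(acceptable_daily_intake(50.0))  # 0.5 mg/kg bw/day
```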

The higher the ADI, the safer a compound is considered to be for regular ingestion.

The ADI concept can be understood as a measure to indicate the toxicity from long-term exposure to repeated ingestion of chemical compounds in foods (present and/or added), as opposed to acute toxicity.

Detection method for GM food

The detection of genetically modified organisms (GMOs) in food or feed is possible by biochemical means. It can either be qualitative, showing which GMO is present, or quantitative, measuring in which amount a certain GMO is present. Being able to detect a GMO is an important part of food safety, as without detection methods the traceability of GMOs would rely solely on documentation.
Polymerase chain reaction (PCR)

The polymerase chain reaction (PCR) is a biochemistry and molecular biology technique for isolating and exponentially amplifying a fragment of DNA, via enzymatic replication, without using a living organism. It enables the detection of specific strands of DNA by making millions of copies of a target genetic sequence. The target sequence is essentially photocopied at an exponential rate, and simple visualisation techniques can make the millions of copies easy to see.
The method works by pairing the targeted genetic sequence with custom-designed complementary bits of DNA called primers. In the presence of the target sequence, the primers match with it and trigger a chain reaction: DNA replication enzymes use the primers as docking points and start doubling the target sequences. The process is repeated over and over by sequential heating and cooling until doubling and redoubling has multiplied the target sequence several million-fold. The millions of identical fragments are then separated in a slab of gel, dyed, and can be seen with UV light.
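
The doubling-per-cycle behaviour can be sketched as simple exponential growth. The `efficiency` parameter below is an assumed per-cycle value, where 1.0 means perfect doubling:

```python
def pcr_copies(initial_copies, cycles, efficiency=1.0):
    """Copies of the target sequence after a number of PCR cycles.
    efficiency 1.0 means the template doubles perfectly each cycle."""
    return initial_copies * (1 + efficiency) ** cycles

# A single target molecule after 30 cycles of perfect doubling:
print(pcr_copies(1, 30))  # 2**30, i.e. over a billion copies
```

This exponential growth is why PCR can take a target sequence from undetectable to easily visible in a few dozen cycles.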

Quantitative detection

Quantitative PCR (Q-PCR) is used to measure the quantity of a PCR product (preferably in real time, as QRT-PCR). It is the method of choice for quantitatively measuring amounts of transgene DNA in a food or feed sample. Q-PCR is commonly used to determine whether a DNA sequence is present in a sample, and the number of its copies in the sample. The method with the highest level of accuracy currently available is quantitative real-time PCR. QRT-PCR methods use fluorescent dyes, such as SYBR Green, or fluorophore-containing DNA probes, such as TaqMan, to measure the amount of amplified product in real time. If the targeted genetic sequence is unique to a certain GMO, a positive PCR test proves that the GMO is present in the sample.
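
Real-time quantification rests on the threshold cycle (Ct): the fewer cycles a reaction needs to reach the fluorescence threshold, the more template was present at the start. A sketch of the resulting relative quantification, assuming perfect amplification efficiency (the function name is invented for illustration):

```python
def relative_quantity(ct_sample, ct_reference, efficiency=1.0):
    """Starting-quantity ratio between two real-time PCR reactions,
    inferred from their threshold cycles (Ct). Each extra cycle needed
    to reach the fluorescence threshold implies roughly half the
    starting template, so ratio = (1 + E) ** (Ct_ref - Ct_sample)."""
    return (1 + efficiency) ** (ct_reference - ct_sample)

# A transgene crossing the threshold 3 cycles after a reference gene
# started at about one eighth of the reference copy number:
print(relative_quantity(ct_sample=25.0, ct_reference=22.0))  # 0.125
```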

Qualitative detection

Whether or not a GMO is present in a sample can be tested by Q-PCR, but also by multiplex PCR. Multiplex PCR uses multiple, unique primer sets within a single PCR reaction to produce amplicons of varying sizes specific to different DNA sequences, i.e. different transgenes. By targeting multiple genes at once, additional information may be gained from a single test run that otherwise would require several times the reagents and more time to perform. Annealing temperatures for each of the primer sets must be optimized to work correctly within a single reaction, and amplicon sizes, i.e., their base pair length, should be different enough to form distinct bands when visualized by gel electrophoresis.
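
The requirement that amplicon sizes be "different enough" to form distinct bands can be sketched as a simple pairwise check. The transgene names and the 20 bp separation threshold below are invented for illustration:

```python
def resolvable_on_gel(amplicon_sizes_bp, min_separation_bp=20):
    """Check that every pair of amplicon lengths in a multiplex panel
    differs by at least min_separation_bp (an assumed threshold), so
    the products form distinct bands under gel electrophoresis."""
    sizes = sorted(amplicon_sizes_bp.values())
    return all(b - a >= min_separation_bp for a, b in zip(sizes, sizes[1:]))

# Hypothetical transgene panel for one multiplex reaction:
panel = {"35S promoter": 123, "NOS terminator": 180, "cry1Ab fragment": 420}
print(resolvable_on_gel(panel))  # True
```
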
Near infrared fluorescence (NIR)

Near infrared fluorescence (NIR) detection is a method that can reveal what kinds of chemicals are present in a sample based on their physical properties. By hitting a sample with near infrared light, chemical bonds in the sample vibrate and re-release the light energy at a wavelength characteristic for a specific molecule or chemical bond. It is not yet known if the differences between GMOs and conventional plants are large enough to detect with NIR imaging. Although the technique would require advanced machinery and data processing tools, a non-chemical approach could have some advantages such as lower costs and enhanced speed and mobility.

Identification of Foodborne Pathogen

The Enzyme-Linked ImmunoSorbent Assay, or ELISA,
is a biochemical technique used mainly in immunology to detect the presence of an antibody or an antigen in a sample. The ELISA has been used as a diagnostic tool in medicine and plant pathology, as well as a quality control check in various industries.

Performing an ELISA involves at least one antibody with specificity for a particular antigen. The sample with an unknown amount of antigen is immobilized on a solid support (usually a polystyrene microtiter plate) either non-specifically (via adsorption to the surface) or specifically (via capture by another antibody specific to the same antigen, in a "sandwich" ELISA). After the antigen is immobilized the detection antibody is added, forming a complex with the antigen. The detection antibody can be covalently linked to an enzyme, or can itself be detected by a secondary antibody which is linked to an enzyme through bioconjugation.

Between each step the plate is typically washed with a mild detergent solution to remove any proteins or antibodies that are not specifically bound. After the final wash step the plate is developed by adding an enzymatic substrate to produce a visible signal, which indicates the quantity of antigen in the sample. Older ELISAs utilize chromogenic substrates, though newer assays employ fluorogenic substrates with much higher sensitivity.

In simple terms, an unknown amount of antigen in a sample is immobilized on a surface. One then washes a particular antibody over the surface. This antibody is linked to an enzyme that visibly reacts when activated, say by light hitting it in the case of a fluorescent enzyme; the brightness of the fluorescence would then tell you how much antigen is in your sample.

The Enzyme ImmunoAssay (EIA) is a synonym for the ELISA.

Application of ELISA

Because the ELISA can be performed to evaluate either the presence of antigen or the presence of antibody in a sample, it is a useful tool both for determining serum antibody concentrations (such as with the HIV test[1] or West Nile Virus) and also for detecting the presence of antigen. It has also found applications in the food industry in detecting potential food allergens such as milk, peanuts, walnuts, almonds, and eggs.
Methods

The steps of the general, "indirect," ELISA for determining serum antibody concentrations are:

1. Apply a sample of known antigen to a surface, often the well of a microtiter plate. The antigen is fixed to the surface to render it immobile. Simple adsorption of the protein to the plastic surface is usually sufficient. These samples of known antigen concentrations will constitute a standard curve used to calculate antigen concentrations of unknown samples. Note that the antigen itself may be an antibody.

2. The plate wells or other surface are then coated with serum samples of unknown antigen concentration, diluted into the same buffer used for the antigen standards. Since antigen immobilization in this step is due to non-specific adsorption, it is important for the total protein concentration to be similar to that of the antigen standards.

3. A concentrated solution of non-interacting protein, such as Bovine Serum Albumin (BSA) or casein, is added to all plate wells. This step is known as blocking, because the serum proteins block non-specific adsorption of other proteins to the plate.

4. The plate is washed, and a detection antibody specific to the antigen of interest is applied to all plate wells. This antibody will only bind to immobilized antigen on the well surface, not to other serum proteins or the blocking proteins.

5. The plate is washed to remove any unbound detection antibody. After this wash, only the antibody-antigen complexes remain attached to the well.

6. Secondary antibodies, which will bind to any remaining detection antibodies, are added to the wells. These secondary antibodies are conjugated to the substrate-specific enzyme. This step may be skipped if the detection antibody is conjugated to an enzyme.

7. The plate is washed, so that excess unbound enzyme-antibody conjugates are removed.

8. A substrate is applied, which is converted by the enzyme to elicit a chromogenic or fluorogenic signal.

9. The result is viewed and quantified using a spectrophotometer, spectrofluorometer, or other optical device.

The enzyme acts as an amplifier; even if only a few enzyme-linked antibodies remain bound, the enzyme molecules will produce many signal molecules. A major disadvantage of the indirect ELISA is that the method of antigen immobilization is non-specific; any proteins in the sample will stick to the microtiter plate well, so small concentrations of analyte in serum must compete with other serum proteins when binding to the well surface. The sandwich ELISA provides a solution to this problem.

ELISA may be run in a qualitative or quantitative format. Qualitative results provide a simple positive or negative result for a sample. The cutoff between positive and negative is determined by the analyst and may be statistical. Two or three times the standard deviation is often used to distinguish positive and negative samples. In quantitative ELISA, the optical density or fluorescent units of the sample is interpolated into a standard curve, which is typically a serial dilution of the target.
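
Interpolating an unknown sample's optical density into the standard curve can be sketched as piecewise-linear interpolation between the serial-dilution points. The curve values below are hypothetical, and real assays often use a four-parameter logistic fit instead:

```python
def interpolate_concentration(od, standards):
    """Estimate analyte concentration by linear interpolation between
    the two bracketing points of a standard curve, given as a list of
    (concentration, optical density) pairs from a serial dilution."""
    standards = sorted(standards, key=lambda point: point[1])
    for (c_lo, od_lo), (c_hi, od_hi) in zip(standards, standards[1:]):
        if od_lo <= od <= od_hi:
            fraction = (od - od_lo) / (od_hi - od_lo)
            return c_lo + fraction * (c_hi - c_lo)
    raise ValueError("OD outside the range of the standard curve")

# Hypothetical serial-dilution standards: (concentration ng/mL, OD)
curve = [(1.0, 0.10), (2.0, 0.20), (4.0, 0.40), (8.0, 0.80)]
print(round(interpolate_concentration(0.30, curve), 3))  # 3.0
```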
