
Nobel Prize winners  

in full Sir John Anthony Pople

born October 31, 1925, Burnham-on-Sea, Somerset, England
 
died March 15, 2004, Chicago, Illinois, U.S.

British mathematician and chemist who, with Walter Kohn, received the 1998 Nobel Prize for Chemistry for work on computational methodology in quantum chemistry. Pople's share of the prize recognized his development of computer-based methods of studying the quantum mechanics of molecules.

Pople was educated at the University of Cambridge and received a Ph.D. in mathematics from that institution in 1951. He was a fellow at Trinity College, Cambridge, from 1951 to 1958 and a lecturer in mathematics there from 1954 to 1958. He then headed the Basic Physics Division of the National Physical Laboratory (Middlesex, England) from 1958 to 1964. He was a professor at Carnegie-Mellon University (Pittsburgh, Pennsylvania) from 1964 to 1993, and he also taught at Northwestern University (Evanston, Illinois) from 1986 to 1993.

Pople's research centred on applying the complicated mathematics of quantum mechanics to study the chemical bonding between atoms within molecules. The use of quantum mechanics was problematic in this regard, because the necessary mathematical calculations for describing the probability states (wave functions) of individual electrons in molecular systems are so complex. However, the development in the 1960s of increasingly powerful computers that could perform such calculations opened up new opportunities in the field. In the late 1960s Pople designed a computer program, Gaussian, that could perform quantum-mechanical calculations to provide quick and accurate theoretical estimates of the properties of molecules and of their behaviour in chemical reactions. Gaussian eventually entered use in chemical laboratories throughout the world and became a basic tool in quantum-chemical studies. The computer models provided by this program have increased the understanding of such varied phenomena as interstellar matter and the effect of pollutants on the environment. These models also enable scientists to simulate the effectiveness of new drugs.
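
The kind of calculation Gaussian automates can be illustrated with modern open-source tools. The sketch below uses the PySCF library (a stand-in for illustration, not Pople's Gaussian itself) to compute a Hartree-Fock energy for water in the STO-3G basis, one of the basis sets developed by Pople's group; the geometry is approximate.

    from pyscf import gto, scf

    # Define a water molecule; coordinates are in Angstroms (approximate geometry).
    mol = gto.M(
        atom="O 0.000 0.000 0.000; H 0.757 0.586 0.000; H -0.757 0.586 0.000",
        basis="sto-3g",  # STO-3G: a minimal basis set from Pople's group
    )

    # Solve the restricted Hartree-Fock equations for the molecular wave function.
    mf = scf.RHF(mol)
    total_energy = mf.kernel()  # total electronic energy, in hartrees
    print(f"RHF/STO-3G energy of water: {total_energy:.6f} hartree")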

In addition to the Nobel Prize, Pople received numerous awards, and in 2003 he was knighted by Queen Elizabeth II.

Serum Free Light Chains Test

Serum Free Light Chains


Also known as: Free light chains; SFLC; FLC; Kappa and Lambda Free Light Chains; Quantitative Serum Free Light Chains with Ratio
Formal name: Light Chains, Free; Free Kappa/Lambda Ratio

Related tests:
Protein Electrophoresis; Quantitative Immunoglobulins


At a Glance

Why Get Tested?

To help detect, diagnose, and monitor light chain plasma cell disorders (dyscrasias) such as light chain multiple myeloma and primary amyloidosis, and to monitor the effectiveness of treatment

When to Get Tested?

When you have bone pain, fractures, anemia, kidney disease, and recurrent infections that your doctor suspects are due to a plasma cell disorder; when you are being treated for a light chain plasma cell disorder

Sample Required?

A blood sample drawn from a vein in your arm

Test Preparation Needed?

None

ESR Test

ESR


Also known as: Sed rate; Sedimentation rate; Westergren sedimentation rate
Formal name: Erythrocyte sedimentation rate
Related tests: C-reactive protein (CRP); ANA; RF


At a Glance

Why Get Tested?

To determine the presence of one or more types of conditions, including infections, tumors, inflammation, and those leading to the breakdown or decreased function of tissue or organs (degenerative), and/or to monitor the progress of disease or effect of therapy

When to Get Tested?

When your doctor thinks that you might have a condition (see above) and to monitor the course of temporal arteritis, polymyalgia rheumatica, or rheumatoid arthritis

Sample Required?

A blood sample drawn from a vein in the arm

Test Preparation Needed?

None

The Test Sample

What is being tested?

Erythrocyte sedimentation rate (ESR) is an indirect measure of the degree of inflammation present in the body. It actually measures the rate of fall (sedimentation) of erythrocytes (red blood cells) in a sample of blood that has been placed into a tall, thin, vertical tube. Results are reported in millimeters of clear plasma that are present at the top portion of the tube after one hour.

Normally, red cells fall slowly, leaving little clear plasma. Increased blood levels of abnormal proteins or certain other proteins called acute phase reactants such as fibrinogen or immunoglobulins, which are increased in inflammation, cause the red blood cells to fall more rapidly, increasing the ESR. Acute phase reactants and the ESR may be increased in a number of different conditions, such as infections, autoimmune diseases, and cancer.
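
Because ESR normally rises with age, results are often judged against an age-adjusted upper limit. Below is a minimal sketch of one widely quoted rule of thumb (an upper limit in mm/hr of roughly age/2 for men and (age + 10)/2 for women); actual reference ranges vary by laboratory, so the cutoffs here are illustrative assumptions, not this article's values.

    def esr_upper_limit(age_years: int, is_female: bool) -> float:
        """Approximate age-adjusted upper limit of normal ESR in mm/hr.

        Rule of thumb: age/2 for men, (age + 10)/2 for women. Illustrative
        only; laboratories publish their own reference ranges.
        """
        return (age_years + 10) / 2 if is_female else age_years / 2

    # Example: a 60-year-old woman with a measured ESR of 42 mm/hr
    limit = esr_upper_limit(60, is_female=True)  # 35.0 mm/hr
    print(42 > limit)  # True: above the rule-of-thumb limit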

How is the sample collected for testing?

A blood sample is obtained by inserting a needle into a vein in the arm.

NOTE: If undergoing medical tests makes you or someone you care for anxious, embarrassed, or even difficult to manage, you might consider reading one or more of the following articles: Coping with Test Pain, Discomfort, and Anxiety, Tips on Blood Testing, Tips to Help Children through Their Medical Tests, and Tips to Help the Elderly through Their Medical Tests.

Another article, Follow That Sample, provides a glimpse at the collection and processing of a blood sample and throat culture.

Is any test preparation needed to ensure the quality of the sample?

No test preparation is needed.

The Test

How is it used?

The erythrocyte sedimentation rate (ESR) is an easy, inexpensive, nonspecific test that has been used for many years to help detect conditions associated with acute and chronic inflammation, including infections, cancers, and autoimmune diseases. ESR is said to be nonspecific because increased results do not tell the doctor exactly where the inflammation is in the body or what is causing it, and also because it can be affected by other conditions besides inflammation. For this reason, the ESR is typically used in conjunction with other tests.

ESR is helpful in diagnosing two specific inflammatory diseases, temporal arteritis and polymyalgia rheumatica. A high ESR is one of the main test results used to support the diagnosis. It is also used to monitor disease activity and response to therapy in both of these diseases.


When is it ordered?

An ESR may be ordered when a condition or disease is suspected of causing inflammation somewhere in the body. There are numerous inflammatory conditions that may be detected using this test. For example, it may be ordered when arthritis is suspected of causing inflammation and pain in the joints or when digestive symptoms are suspected to be caused by inflammatory bowel disease.

A physician may order an ESR test (along with other tests) to evaluate a patient who has symptoms that suggest polymyalgia rheumatica or temporal arteritis, such as headaches, neck or shoulder pain, pelvic pain, anemia, unexplained weight loss, and joint stiffness. There are many other conditions that can result in a temporary or sustained elevation in the ESR.

Before doing an extensive workup looking for disease, a doctor may want to repeat the ESR test after a period of several weeks or months. If a doctor already knows the patient has a disease like temporal arteritis (where changes in the ESR mirror those in the disease process), she may order the ESR at regular intervals to assist in monitoring the course of the disease. In the case of Hodgkin's disease, for example, a sustained elevation in ESR may be a predictor of an early relapse following chemotherapy.


Breast Cancer Test

Breast Cancer


Overview | Signs & Symptoms | Testing | Prevention | Early Detection | Treatment

Breast cancers are malignant tumors that arise from the uncontrolled growth of cells in the breast. Occurring primarily in the ducts that transport milk to the nipple during lactation (breast feeding), and secondarily in the lobules, the glands that produce milk, breast cancers are distinct from cancers that may spread to the breasts from other parts of the body.

Each year, more women in the United States are diagnosed with breast cancer than with any other cancer, with the exception of skin cancer. The American Cancer Society (ACS) estimates that 178,480 new cases of invasive breast cancer were diagnosed in women in the U.S. in 2007 and that about 40,460 women died from the disease. Men can also develop the disease. ACS estimates that about 2,030 men were diagnosed with breast cancer in 2007, and about 450 men died. The rest of this article will focus on breast cancer in women. It is recommended that men who have been diagnosed with breast cancer speak to their doctor for information specific to them and see the ACS's web site All About Breast Cancer in Men.

Breast cancer can develop at any age, but the risk of developing it increases as women get older. While 5% to 10% of breast cancers are related to an inherited defect in one of two breast cancer genes (BRCA-1 or BRCA-2), the majority of cases develop for reasons we do not yet understand. As a general rule, those at higher risk of developing breast cancer include women whose close relatives have had the disease, women who have had a previous breast cancer in the other breast, women who have not had children, and women who had their first child after the age of 30. Each breast cancer will have its own characteristics. Some are slow growing; others can be aggressive. Some are sensitive to the hormones estrogen and progesterone, while others over-express certain proteins, such as HER-2/neu. The cancer's characteristics can affect treatment choices and the potential for the cancer to recur.

Breast cancer may be divided into three stages, reflecting the extent to which the cancer has spread in the body.

  • Early stage breast cancer that is confined to its original location is known as noninvasive cancer. If the cancer is confined to the ducts, it is called ductal carcinoma in situ (DCIS), and if it is confined to the lobules, it is called lobular carcinoma in situ (LCIS). At this stage, the cancer cannot be felt as a lump in the breast, but DCIS can sometimes be detected by mammography.
  • Invasive stage breast cancer is characterized by a spread of the cancer beyond the ducts or lobules and into the surrounding areas of breast tissue. At this stage, the cancer may be detected through a breast self-exam, by a clinical breast exam performed by a health care professional, or by mammography.
  • Metastatic stage breast cancer is cancer that has spread (metastasized) to other areas of the body, including nearby lymph nodes. At this stage, treatment requires the combined effort of several specialists, including surgeons, oncologists, and radiologists.

Signs and Symptoms

It is important to remember that most lumps found in the breast are not cancerous but are benign and that the symptoms and signs associated with breast cancer may be due to other causes. Signs and symptoms include:

  • Mass or lump in the breast
  • Breast skin dimpling, reddening, or thickening
  • Nipple retraction
  • Breast swelling or pain
  • Nipple pain and/or discharge

  • Swelling or lumps in adjacent underarm lymph node
Testing

The goals of breast cancer testing are to identify genetic risk in high risk patients, detect and diagnose breast cancer in its earliest stages, determine how far it has spread, evaluate the cancer’s characteristics in order to guide treatment, monitor the effectiveness of treatment, and monitor the woman over time to detect and address any cancer recurrences. The table below summarizes various breast cancer tests. Detailed discussions of the tests follow the table.

Tests for Breast Cancer

  • Her-2/neu (sample: tissue): Patients with increased levels respond well to Herceptin and have a good prognosis.
  • Estrogen receptor/progesterone receptor (sample: tissue): Increased levels suggest a good prognosis in response to anti-hormone therapy.
  • CA15-3/CA 27.29 (sample: blood): Elevated blood levels of these cancer antigens may indicate recurrence of cancer.
  • BRCA-1/BRCA-2 (sample: blood): These genetic markers, if present, suggest up to an 80% likelihood of developing breast cancer.
  • Oncotype DX (sample: tissue): May assist in determining risk of recurrence and predict who will benefit from hormone therapy or chemotherapy.
  • MammaPrint (Agendia) (sample: tissue): May assist in determining whether a patient is at risk for possible metastasis of cancer.
  • DNA ploidy (sample: tissue): Determines the rate of tumor cell growth (S phase), which, if elevated, suggests a poor prognosis and may call for chemotherapy.
  • Ki-67 antigen (sample: tissue): Elevated levels indicate rapid tumor cell growth and thus suggest a poor prognosis.
  • Ductal lavage (sample: N/A): The presence of abnormal cytology (abnormal-looking cells) may make this a useful screening tool for identifying cancer.
  • Mammogram (sample: N/A): Highly sensitive digital X-ray technology that may detect small lumps that would otherwise go undetected through self-exam.

Laboratory Tests

Laboratory tests for breast cancer can be broken down into groups based on the purpose of testing. Some tests are performed on the patient's blood; others are done on a sample of cells or the tumor tissue.

Cytology and surgical pathology

When a radiologist detects a suspicious area, such as calcifications or a non-palpable mass on a mammogram, or if a lump has been found during a clinical or self-exam (see Non-Laboratory Tests below), a doctor will frequently order a needle or surgical biopsy or a fine needle aspiration. In each case, a small sample of tissue is taken from the suspicious area of the breast so that a pathologist can examine the cells microscopically for signs of cancer. This pathological examination is done to determine whether the lesion is benign or malignant.

Malignant cells show changes or deviations from normal cells. Signs include changes in the size of cell nuclei and evidence of increased cell division. Pathologists can diagnose cancer based upon the observed changes, determine how abnormal the cells appear, and see whether there is a single type of change or a mixture of changes. These results help guide breast cancer treatment.

Needle aspirations are limited due to the small sample that is obtained. A tissue biopsy is needed to determine if a cancer is early stage or invasive. When a breast cancer is surgically removed (see Treatment), cells from the tumor and sometimes from adjacent tissue and lymph nodes are examined by the pathologist to help determine how far the cancer has spread.

Tests performed on tumor tissue

If the pathologist's diagnosis is breast cancer, there are several tests that may be performed on the cancer cells. The results of these tests provide a prognosis and help the oncologist (cancer specialist) guide the patient’s treatment. The most useful of these are HER-2/neu and estrogen and progesterone receptors.

  • Her-2/neu is an oncogene. It codes for a receptor for a particular growth factor that causes cells to grow. Normal epithelial cells contain two copies of the Her-2/neu gene and produce low levels of the Her-2 protein on the surface of their cells. In about 20-30% of invasive breast cancers, the Her-2/neu gene is amplified and its protein is over-expressed. These tumors are susceptible to treatment that specifically binds to this over-expressed protein. The chemotherapeutic agent Herceptin (trastuzumab) blocks the protein receptors, inhibiting continued replication and tumor growth. Patients with an amplified Her-2/neu gene respond well to Herceptin and have a good prognosis.
  • Estrogen and progesterone receptor status are important prognostic markers in breast cancer. The higher the percentage of overall cells positive, as well as the greater the intensity, the better the prognosis. Estrogen and/or progesterone receptor positivity in breast cancer cells indicates sensitivity to hormones. The patient may be a good candidate for anti-hormone therapy.

Blood tests

Blood tests may be used to help determine whether or not the tumor is responding to therapy or if it has recurred. Some may be ordered on women who are at a high risk of developing breast cancer to determine whether their risk has a genetic component.

  • CA15-3 (or CA 27.29) is a tumor marker that may be ordered at intervals after treatment to help monitor a patient for breast cancer recurrence. It is not used as a screen for breast cancer but can be used to follow it in some patients once it has been diagnosed.
  • BRCA-1 or BRCA-2 gene mutation – Women who are at high risk because of a personal or strong family history of early onset breast cancer or ovarian cancer can find out if they have a BRCA gene mutation. A mutation in either gene indicates that the patient is at significantly higher lifetime risk (up to 80%) for developing the disease. It is important to remember, however, that only about 5% to 10% of breast cancer cases occur in women with a BRCA gene mutation. Genetic counseling should be considered both before testing takes place and after receiving positive test results.

Other tests

There are several tests available, and many others being researched, that evaluate large numbers of genetic patterns in breast cancer tumor tissue. These tests are being investigated as predictive tests for the recurrence of breast cancer and therapy outcome. The American Society of Clinical Oncology (ASCO) mentioned several of them in its recent “2007 Update of Recommendations for the Use of Tumor Markers in Breast Cancer” and some have been included in the National Comprehensive Cancer Network’s 2008 Breast Cancer Treatment Guidelines. In most cases, the tests were deemed promising, but data to support their routine clinical use were still thought to be insufficient. Examples of tests being ordered by some doctors include:

  • Oncotype DX – ASCO indicates that this test, which measures 21 genes, can be used to predict risk of cancer recurrence in patients who have been newly diagnosed with early breast cancer, have cancer-negative lymph nodes, have estrogen receptor positive tumors, and are taking the drug tamoxifen.
  • MammaPrint test – in use in Europe and recently cleared by the FDA for use in the U.S. This test evaluates gene activity patterns in 70 tumor genes. It may be used to help predict whether a breast cancer will recur and/or metastasize in women who have early stage cancer, are under the age of 61, and have cancer-negative lymph nodes.
  • There are additional tests that may be used in some breast cancer cases, such as DNA ploidy, Ki-67, or other proliferation markers. However, most authorities believe that HER-2/neu, estrogen and progesterone receptor status are the most important to evaluate first. The other tests do not have therapeutic implications and, when compared with grade and stage of the disease, are not independently significant with respect to prognosis. Some medical centers use these tests for additional information in evaluating patients, making it important to discuss the value of these tests with your cancer management team.

    Non-Laboratory Tests

    In addition to laboratory tests, there are non-laboratory tests that are equally important. These include:

    • Mammography is widely recommended as a screening tool. A screening mammogram uses X-ray technology to produce an image of the breasts and can reveal breast cancer up to two years before a lump is large enough to be felt during a clinical or self-exam.
    • Newer technologies, such as digital mammography and computer-aided detection, may yield a clearer image than a standard film mammogram in some cases. In particular, younger women, whose breast tissue is often too dense to show tumors clearly on the X-ray film used for a standard mammogram, may benefit from ultrasound exams or magnetic resonance imaging (MRI).
    • Ductal lavage may also be used as a screening tool, particularly for women at high risk for developing the disease. In this procedure, a doctor extracts cells via a tiny tube inserted through the patient's nipple. Those cells are then examined for signs of cancer.

    For more information on mammography and other imaging technologies, go to the National Cancer Institute’s website or the College of American Pathologists website.


    Prevention

    For most women, a healthy lifestyle that includes regular exercise, maintaining a healthy body weight, and avoiding alcohol is the best way to minimize the risk of developing breast cancer. Research studies continue to identify factors that are associated with an increased or decreased risk of developing the disease, but there is no single set of actions that will cause or prevent breast cancer. Women should work with their doctor to determine their personal risk factors and how to best address them.

    Women who are at high risk of developing breast cancer may be able to take the drug tamoxifen to reduce their risk. However, tamoxifen can increase the risk of developing blood clots, endometrial (uterine) cancer, and possibly cardiovascular disease, so the decision to take the medication needs to be weighed carefully. Your doctor can help you to assess the risks and benefits of such treatment.

    For those women who have the gene mutation (BRCA-1 and BRCA-2) frequently associated with breast cancer, prophylactic mastectomy is an option. Women electing this option choose to have both breasts removed before developing cancer rather than run the high risk of developing the disease later in their lifetime. Studies have shown that such surgery can reduce the risk of developing breast cancer by approximately 90%. Other women elect to have a prophylactic mastectomy on their cancer-free breast after developing cancer in the other breast. Your doctor can best advise you if you are considering prophylactic mastectomy.


    Early Detection

    Breast cancer that is detected and treated in its earliest stages can be cured over 90% of the time. The primary early detection tools are breast self-exams, clinical breast exams, and mammograms.

    The American Cancer Society (ACS) recommends that:

    • women age 20 and older do a breast self-exam every month,
    • women under the age of 39 have a clinical breast exam by a health care professional as part of their regular physical at least every three years, and
    • women age 40 and over also have a yearly mammogram.

    Women with certain risk factors may be advised to begin screening at an earlier age and may be advised to be screened more frequently.

    The U.S. Preventive Services Task Force updated its recommendations on the use of these screening methods in November of 2009. Based on their scientific review, they no longer recommend screening mammograms for women under the age of 50 and they recommend routine mammography every 2 years for women ages 50-74.

    Your doctor can help you to assess your risk of developing breast cancer and can recommend how often screening should be done in your case.


    ASO test

    ASO


    Also known as: ASLO
    Formal name: Antistreptolysin O titer
    Related tests: Strep throat, Anti-DNase-B

    At a Glance

    Why Get Tested?

    To help determine whether a person has had a recent Group A streptococcal infection; to help diagnose post-streptococcal sequelae of rheumatic fever and glomerulonephritis

    When to Get Tested?

    When someone has a fever, chest pain, fatigue, shortness of breath, edema, or other symptoms associated with rheumatic fever or glomerulonephritis, especially when a person recently had a sore throat but no rapid test or culture was done to confirm a Group A streptococcal infection

    Sample Required?

    A blood sample drawn from a vein in your arm

    Test Preparation Needed?

    None


    The Test Sample

    What is being tested?

    This test measures the amount of antistreptolysin O (ASO) in the blood. ASO is an antibody targeted against streptolysin O, a toxin produced by Group A streptococcus bacteria. ASO and anti-DNase B are the most common of several antibodies that are produced by the body's immune system in response to a Group A streptococcal infection.

    Group A streptococcus (Streptococcus pyogenes), is the bacterium responsible for causing strep throat. In most cases, strep infections are identified, treated with antibiotics, and the infections resolve. When a strep infection does not cause identifiable symptoms, goes untreated, or is treated ineffectively, however, post-streptococcal complications (sequelae), namely rheumatic fever and glomerulonephritis, can sometimes develop, especially in young children. These secondary conditions have become much less prevalent in the U.S. because of routine strep testing, but they still do occur. They cause symptoms such as fever, fatigue, shortness of breath, heart palpitations, decreased urine output, and bloody urine. They can damage the heart and/or cause acute kidney dysfunction, leg swelling (edema), and high blood pressure (hypertension). Because these symptoms may also be seen with other conditions, the ASO test can be used to help determine if they are due to a recent Group A strep infection.

    For more information on rheumatic fever and glomerulonephritis, see the Links tab.

    How is the sample collected for testing?

    A blood sample is obtained by inserting a needle into a vein in the arm.

    NOTE: If undergoing medical tests makes you or someone you care for anxious, embarrassed, or even difficult to manage, you might consider reading one or more of the following articles: Coping with Test Pain, Discomfort, and Anxiety, Tips on Blood Testing, Tips to Help Children through Their Medical Tests, and Tips to Help the Elderly through Their Medical Tests.

    Another article, Follow That Sample, provides a glimpse at the collection and processing of a blood sample and throat culture.

    Is any test preparation needed to ensure the quality of the sample?

    No test preparation is needed.

    The Test

    How is it used?

    The ASO test is primarily ordered by itself or along with an anti-DNase B to help determine whether a person has had a recent streptococcal infection. In most cases, strep infections are identified and treated with antibiotics and the infections resolve. In cases where they do not cause identifiable symptoms and/or go untreated, however, post-streptococcal complications (sequelae), namely rheumatic fever and glomerulonephritis, can develop in some patients, especially young children. The test, therefore, is ordered if a person presents with symptoms suggesting rheumatic fever or glomerulonephritis and has had a recent history of sore throat or a confirmed streptococcal infection. Since the incidence of post-streptococcal complications has dropped in the U.S., so has the use of the ASO test.


    When is it ordered?

    The ASO test is ordered when a person has symptoms that the doctor suspects may be due to an illness caused by a previous streptococcal infection. It is ordered when the symptoms emerge, usually in the weeks following a sore throat or skin infection. The test may be ordered twice over a period of 10-14 days to determine if the antibody level is rising, falling, or remaining the same.

    Some symptoms of rheumatic fever may include:

    • Fever
    • Joint swelling and pain in more than one joint, especially in the ankles, knees, elbows and wrists, sometimes moving from one joint to another
    • Small, painless nodules under the skin
    • Rapid, jerky movements (Sydenham's chorea)
    • Skin rash
    • Sometimes the heart can become inflamed (carditis); this may not produce any symptoms but also may lead to shortness of breath, heart palpitations, or chest pain

    Some symptoms of glomerulonephritis may include:

    • Fatigue, decreased energy
    • Decreased urine output
    • Bloody urine
    • Rash
    • Joint pain
    • Swelling (edema)
    • High blood pressure
    However, these symptoms can be seen in other conditions.

    The test may be performed twice, with samples collected about two weeks apart, for acute and convalescent ASO titers. This is done to determine if the antibody level is rising, falling, or remaining the same.
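
    As a sketch of how paired results might be compared, the function below applies the general serologic convention that a fourfold or greater rise between the acute and convalescent titers points to a recent infection. The article does not state a numeric cutoff, so the factor of four is an assumption for illustration only.

        def significant_titer_rise(acute: int, convalescent: int) -> bool:
            """True if the convalescent ASO titer is at least four times the acute titer.

            The fourfold criterion is a common serologic convention, assumed here
            for illustration; laboratories define their own significant changes.
            """
            return convalescent >= 4 * acute

        # Example: acute titer of 100 IU/mL, convalescent titer of 400 IU/mL two weeks later
        print(significant_titer_rise(100, 400))  # True: consistent with a recent infection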


    What does the test result mean?

    ASO antibodies are produced about a week to a month after an initial strep infection. ASO levels peak at about 4 to 6 weeks after the illness and then taper off but may remain at detectible levels for several months after the strep infection has resolved.

    If the test is negative or if ASO is present in very low concentrations, then the person tested most likely has not had a recent strep infection. This is especially true if a sample taken 10 to 14 days later is also negative or low level and if an anti-DNase B test is also negative. A small percentage of those with a post-streptococcal complication will not have an elevated ASO.

    If the ASO level is high or is rising, then it is likely that a recent strep infection has occurred. ASO levels that are initially high and then decline suggest that an infection has occurred and may be resolving.

    The ASO test does not predict whether complications will occur following a streptococcal infection, nor does it predict the type or severity of the disease. If symptoms of rheumatic fever or glomerulonephritis are present, an elevated ASO level may be used to help confirm the diagnosis.


    Is there anything else I should know?

    Some antibiotics and corticosteroids may decrease ASO antibody levels.

    Common Questions

    1.  Can ASO be used to diagnose strep throat?

    A throat culture or a rapid strep test is the best method to diagnose streptococcal pharyngitis. It is important that strep throat be promptly identified and treated to avoid complications and to avoid passing the infection on to others. Since detectible ASO levels do not appear for at least a week, they are not used to diagnose an acute infection.


    2.  If I am diagnosed with strep, will an ASO always be performed?

    No. In general, the ASO test is only performed when someone has symptoms suggesting that a post-streptococcal complication may have developed and no culture was done to confirm a previous infection with this bacterium. Most people do not experience these complications, so the ASO test is not routinely done.




    Down Syndrome

    Down Syndrome

    Overview | Tests | Treatment

    What is it?
    Down syndrome (DS) is a congenital condition caused by an extra copy or piece of chromosome 21 in all or most of the affected person’s cells. It is a group of signs, symptoms, birth defects, and complications that arise from an error in cell division that occurs before, or shortly after, conception. This error has a widespread effect on the physical and mental development of the affected person.

    Chromosomes hold the body’s genetic blueprint. Most cells in the body contain 22 pairs of chromosomes plus a 23rd pair, either XX (in females) or XY (in males), for a total of 46 chromosomes. Reproductive cells, eggs and sperm, contain a single set of 23 chromosomes that combine when an egg is fertilized to form a new set of 46 in the fetus (half from each parent). In most cases of Down syndrome, random chance leads to the insertion of an extra copy of chromosome 21 in either the egg or sperm. This extra copy becomes part of the fertilized egg and is replicated in all of the embryo’s cells. This form of Down syndrome is called trisomy 21, and it accounts for about 95% of DS cases.

    The error may also occur after conception, in the developing embryo. As the fetus grows, some cells may have 47 chromosomes, while others have 46. This form of Down syndrome is called mosaic trisomy 21.

    In another rare form of Down syndrome called translocation trisomy 21, a piece of chromosome 21 adheres to another chromosome before or at conception. Even though the fetus has 46 chromosomes, it still has an extra portion of chromosome 21 in its cells.

    All individuals with additional chromosome 21 genetic material, regardless of the cause, will develop some of the features of Down syndrome.

    About 1 in 800 babies in the United States is born with Down syndrome. The risk of having an affected baby increases significantly as a woman ages. According to the National Institute of Child Health & Human Development, the risk increases from less than 1 in 1,000 in women under 30 to 1 in 400 by age 35 and to 1 in 12 by the time a woman is 49 years old. However, since younger women have the greatest number of babies, the majority of those with Down syndrome, about 75%, will be born to women under 35.

    There are many characteristic signs and symptoms associated with Down syndrome. Not every child will have every one and the degree to which they are affected may vary greatly. Signs and symptoms include:

    • A small head with small, low-set ears
    • Slanting eyes, a broad flat face, and a short nose
    • A small mouth and protruding tongue
    • Short, small but broad hands and feet and a single crease across the palm
    • Short fingers and an abnormal bone in the 5th (pinky) finger
    • Poor muscle tone (hypotonia)
    • Hyperflexible joints
    • Atlantoaxial instability (a malformation of the top of the spine)
    • Mild to moderate mental retardation

    Complications of Down syndrome vary greatly. Some may be present at birth, some may arise during childhood, others during adulthood, and others may never be experienced. Doctors and family members must be aware of these potential complications as patients may or may not be able to clearly communicate their symptoms and/or may express them in unexpected ways.

    Complications can include:

    • Celiac disease
    • Dental disease
    • Developmental delays
    • Diabetes
    • Food sensitivities and constipation
    • Gastrointestinal abnormalities and obstructions (5 to 10%)
    • Hearing loss (75%)
    • Heart defects and disease (close to 50%)
    • Increased incidence of respiratory and ear infections, colds, bronchitis, tonsillitis, and pneumonia
    • Increased risk of acute leukemia
    • Premature aging, loss of cognitive abilities, and Alzheimer’s type symptoms in patients under 40 years of age
    • Seizure disorders
    • Sleep apnea (50 to 75%)
    • Spinal cord compression
    • Thyroid disease (about 15%)
    • Visual problems, including cataracts (about 60%)

    Tests

    The goal of testing is to screen for Down syndrome, diagnose it, detect any malformations that will require medical interventions shortly after birth, and to monitor the person who has Down syndrome for complications throughout his or her life. Testing is usually a combination of laboratory and non-laboratory evaluations.

    Laboratory Tests
    Screening and diagnostic tests may be done during a woman’s pregnancy, in either the first or the second trimester. Screening tests are not diagnostic; they indicate an increased likelihood of the fetus carrying Down syndrome. ACOG (The American College of Obstetricians and Gynecologists) has recently recommended that all pregnant women be offered DS screening tests.

    Prenatal diagnostic tests may be performed when screening tests are abnormal. They involve taking samples of the fluid or tissues surrounding the baby and evaluating them for an additional copy or portion of chromosome 21. A very small risk of infection and miscarriage are associated with these diagnostic tests.

    Diagnostic testing performed after birth involves taking a sample of blood from the baby and evaluating his or her chromosomes. Tests that detect the complications often seen in those with Down syndrome are used to help diagnose conditions that arise and to monitor the effectiveness of treatment. Some of the complications, such as congenital heart defects and gastrointestinal obstructions, may be present at birth. Others such as hearing loss, vision disorders, leukemia, and thyroid disease may develop at any time during the patient’s life.

    Testing includes:

    Prenatal screening

    • 1st trimester screen - nuchal translucency (non-laboratory test, see below), pregnancy-associated plasma protein A (PAPP-A), and free beta or total hCG (human chorionic gonadotropin), usually performed between 10 weeks, 4 days and 13 weeks, 6 days gestation
    • 2nd trimester screen (triple/quad screen) - alpha feto-protein (AFP), chorionic gonadotropin (hCG), and unconjugated estriol (uE3); quad screen adds inhibin A test; performed at 15 to 20 weeks gestation
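
    Screening analytes such as AFP and hCG are commonly normalized as multiples of the median (MoM), that is, the measured value divided by the median value for that gestational age, before a risk figure is computed. The sketch below shows only that normalization step; the median table is a made-up placeholder, not real clinical data.

        # Hypothetical medians for one analyte by completed week of gestation.
        # Placeholder numbers for illustration, NOT real clinical reference values.
        MEDIAN_BY_WEEK = {15: 30.0, 16: 34.0, 17: 38.0}

        def multiple_of_median(measured: float, gestational_week: int) -> float:
            """Normalize a screening analyte as a multiple of the median (MoM)."""
            return measured / MEDIAN_BY_WEEK[gestational_week]

        # Example: a measured value of 68.0 at 16 weeks gives 2.0 MoM
        print(multiple_of_median(68.0, 16))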

    Prenatal diagnosis

    Diagnosis after birth

    • Chromosomal karyotype – cells are grown from a blood sample and chromosomes are evaluated for an extra copy of chromosome 21. The presence and type of Down syndrome can be determined from this test.

    Non-Laboratory Tests

    Prenatal

    • Nuchal translucency – an ultrasound measurement of the space between the fetal spine and the skin at the back of the neck; not diagnostic, but in a fetus with Down syndrome, there may be an increased amount of space. This test requires a person with specialized training to perform and interpret.
    • 2nd trimester high-resolution ultrasound – can help monitor fetal development and detect malformations such as cardiac and gastrointestinal defects

    At or soon after birth

    • Echocardiogram and chest x-rays (to help detect cardiac defects)
    • Ultrasound and/or MRI (magnetic resonance imaging) to evaluate any suspected congenital conditions such as cardiac defects and gastrointestinal obstructions
    • Hearing evaluation

    Treatments

    Currently there is no way to prevent or cure Down syndrome. Prenatal screening and diagnosis is performed to detect the condition in the fetus and to allow the pregnant woman and her family to make informed choices. Early diagnosis allows the family and doctor to work together to monitor the baby and to prepare for complications that may require attention shortly after birth. Medical treatments may include surgical interventions, such as repairing cardiac defects and gastrointestinal obstructions, and starting medications for conditions such as thyroid disease.

    In individuals with Down syndrome, careful monitoring, prompt attention to acute and chronic conditions that arise, and “early intervention” to maximize the potential of the individual are important. The symptoms, signs, complications, and abilities of people with Down syndrome will vary widely. It is not possible to determine early in a child’s life what they will be able to learn, do, and accomplish. They should be given encouragement and stimulation from an early age, given a healthy diet, and engaged in regular physical activities to maintain muscle strength. Families should work closely with their doctors and other specialists to develop life, monitoring, and treatment plans that meet the unique needs of those affected.

    There are national, state, and local “early intervention” programs and resources that can help children with Down syndrome develop their physical, communication, and cognitive skills. Many children will be able to join regular classes in schools, participate in sports, and as adults hold jobs and live semi-independent lives. Most will be able to live relatively normal and healthy lives. The average lifespan of those with Down syndrome has increased in recent years with most patients living to their mid 50’s, and many into their 60’s and 70’s.

    Anemia Total

    Anemia
      Overview | Iron Deficiency | Pernicious | Aplastic | Hemolytic | Chronic Diseases


    Pernicious Anemia and Other B Vitamin Deficiencies
    Pernicious anemia is a condition in which the body does not make enough of a substance called “intrinsic factor”. Intrinsic factor is a protein produced by parietal cells in the stomach that binds to vitamin B12 and allows it to be absorbed from the small intestine. Vitamin B12 is important in the production of red blood cells (RBCs). Without enough intrinsic factor, the body cannot absorb vitamin B12 from the diet and cannot produce enough normal RBCs, leading to anemia. In addition to lack of intrinsic factor, other causes of vitamin B12 deficiency and anemia include dietary deficiency and conditions that affect absorption of the vitamin from the small intestine such as surgery, certain drugs, digestive disorders (Celiac disease, Crohn’s disease), and infections. Of these, pernicious anemia is the most common cause of symptoms.

    Vitamin B12 deficiency can result in general symptoms of anemia as well as nerve problems. These may include:

        * weakness or fatigue
        * lack of energy
        * numbness and tingling that start first in the hands and feet

    Additional symptoms may include muscle weakness, slow reflexes, loss of balance and unsteady walking. Severe cases can lead to confusion, memory loss, depression, and/or dementia.

    Folic acid is another B vitamin, and deficiency in this vitamin may also lead to anemia. Folic acid, also known as folate, is found in many foods, especially in green, leafy vegetables. Folic acid is added to most grain products in the United States so that deficiency in folic acid is rarely seen in the U.S. today. Folic acid is needed during pregnancy for normal development of the brain and spinal cord. It is important for women considering pregnancy to take folate supplements before they get pregnant and during pregnancy to make sure they are not folate deficient. Folate deficiency early in pregnancy can cause problems in the development of the brain and spinal cord of the baby.

    Anemias resulting from vitamin B12 or folate deficiency are sometimes referred to as “macrocytic” or “megaloblastic” anemia because red blood cells are larger than normal. A lack of these vitamins does not allow RBCs to grow and then divide as they normally would during development, which leads to their large size. This leads to a reduced number of abnormally large RBCs and anemia.

    Laboratory Tests
    Symptoms of anemia will usually be investigated initially with a complete blood count (CBC) and differential. In pernicious anemia or vitamin B12 deficiency, these usually reveal:

        * A low hemoglobin level
        * For red cell indices, the mean corpuscular volume (MCV), which is the average size of RBCs, is often high (see the computation sketch after this list).
        * A blood smear will reveal red blood cells that are abnormally large.
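
    The MCV reported with the red cell indices follows directly from the hematocrit and the red cell count: MCV in femtoliters = (hematocrit % × 10) / RBC count in millions per microliter. A small sketch of that arithmetic is below; the ~100 fL macrocytosis cutoff is a typical value assumed for illustration, since reference ranges vary by laboratory.

        def mcv_fl(hematocrit_percent: float, rbc_millions_per_ul: float) -> float:
            """Mean corpuscular volume in femtoliters: (Hct% * 10) / RBC count."""
            return hematocrit_percent * 10 / rbc_millions_per_ul

        # Example: Hct 33%, RBC 2.8 million/uL -> MCV of about 117.9 fL
        mcv = mcv_fl(33.0, 2.8)
        print(f"MCV = {mcv:.1f} fL; macrocytic: {mcv > 100}")  # 100 fL: assumed cutoff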

    Folic acid deficiency can cause the same pattern of changes in hemoglobin and red cell size as vitamin B12 deficiency. If the cause of your anemia is thought to be due to pernicious anemia or dietary deficiency of B12 or folate, additional tests are usually done to make the diagnosis. Some of these include:

        * Vitamin B12 level—blood level may be low when deficient in B12
        * Folic acid level—blood level may be low if deficient in this B vitamin
        * Methylmalonic acid (MMA)—may be high with vitamin B deficiency
        * Homocysteine—may be high with either folate or vitamin B deficiency
        * Reticulocyte count—is usually low
        * Antibodies to intrinsic factor or parietal cell antibodies—may be present in pernicious anemia

    Sometimes a bone marrow aspiration may be performed. This may reveal larger than normal sizes in the cells that eventually mature and become RBCs (precursors).

    Treatment in these conditions involves supplementation with the vitamin that is deficient. If the cause of deficiency is the inability to absorb the vitamin from the digestive tract, then the vitamin may be given as injections. Treatment of underlying causes such as a digestive disorder or infection may help to resolve the anemia.

    For more on this, see the article on Vitamin B12 and Folate Deficiency.

    Aplastic Anemias
    Aplastic anemia is a rare disease, caused by a decrease in the number of all types of blood cells produced by the bone marrow. Normally, the bone marrow produces a sufficient number of new red blood cells (RBCs), white blood cells (WBCs), and platelets for normal body function. Each type of cell enters the blood stream, circulates, and then dies within a certain time frame. For example, the normal lifespan of RBCs is about 120 days. If the bone marrow is not able to produce enough blood cells to replace those that die, a number of symptoms, including those due to anemia, may result.

    Symptoms of aplastic anemia can appear abruptly or can develop more slowly. Some general symptoms that are common to different types of anemia may appear first and are due to the decrease in number of RBCs. These include:

    • feeling of tiredness, fatigue
    • lack of energy

    Some additional signs and symptoms that occur with aplastic anemia include those due to decreased platelets:

    • prolonged bleeding
    • frequent nosebleeds
    • bleeding gums
    • easy bruising

    and due to a low WBC count:

    • increased number and severity of infections

    Causes of aplastic anemia usually have to do with damage to the stem cells in the bone marrow that are responsible for blood cell production. Factors that may be involved in bone marrow damage include exposure to toxins, certain drugs (for example, chloramphenicol), and prior treatment for cancer.

    Rarely, aplastic anemia is due to an inherited (genetic) disorder such as Fanconi anemia. For more on this condition, see the Fanconi Anemia Research web site.

    Laboratory Tests
    The initial test for anemia, the complete blood count (CBC), may reveal many abnormal results.

    • Hemoglobin and/or hematocrit may be low.
    • RBC and WBC counts are low.
    • Platelet count is low.
    • Red blood cell indices are usually normal.
    • The differential white blood count shows a decrease in most types of cells but not lymphocytes.

    Some additional tests may be performed to help determine the type and cause of the anemia.

    A physical examination or complete medical history may reveal possible causes for aplastic anemia such as exposure to toxins or certain drugs (for example, chloramphenicol) or prior treatment for cancer. Some cases of aplastic anemia are temporary while others have lasting damage to the bone marrow. Treatment depends on the cause. Reducing or eliminating exposure to certain toxins or drugs may help resolve the condition. Medications may be given to stimulate bone marrow production, to treat infections, or to suppress the immune system in cases of autoimmune disorders. Blood transfusions and a bone marrow transplant may be needed in severe cases.

    Hemolytic Anemias
    Rarely, anemia is due to problems that cause the red blood cells (RBCs) to die or be destroyed prematurely. Normally, red cells live in the blood for about 4 months. In hemolytic anemia, this time is shortened, sometimes to only a few days. The bone marrow is not able to produce new RBCs quickly enough to replace those that have been destroyed, leading to a decreased number of RBCs in the blood, which in turn leads to a diminished capacity to supply oxygen to tissues throughout the body. This results in the typical symptoms of anemia including:
    • weakness and/or fatigue
    • lack of energy

    Depending on the cause, different forms of hemolytic anemia can be chronic, developing and lasting over a long period or a lifetime, or acute, appearing suddenly. The various forms can have a wide range of signs and symptoms. See the discussions of the various types below for more on this.

    The different causes of hemolytic anemia fall into two main categories:

    • Inherited forms in which a gene or genes are passed from one generation to the next that result in abnormal RBCs or hemoglobin
    • Acquired forms in which some factor other than an inherited one results in the early destruction of RBCs

    Inherited Hemolytic Anemia
    Two of the most common causes of inherited hemolytic anemia are sickle cell anemia and thalassemia:

    Sickle cell anemia can cause minor difficulties as the “trait” (when you carry one mutated gene from one of your parents), but severe clinical problems as the “disease” (when you carry two mutated genes, one from each of your parents). The red blood cells are misshapen, unstable (leading to hemolysis) and can block blood vessels, causing pain and anemia. Screening is usually done on newborns – particularly those of African descent. Sometimes screening is done prenatally on a sample of amniotic fluid. Follow-up tests for hemoglobin variants may be performed to confirm a diagnosis. Treatment is usually based on the type, frequency and severity of symptoms.

    Thalassemia is a hereditary abnormality of hemoglobin production and results in small red blood cells that resemble those seen in iron deficiency. In its most severe form, the red cells have a shortened life span. In milder forms, anemia is usually mild or absent, and the disease may be detected by finding small blood cells on a routine CBC. This genetic disease is found frequently in people of Mediterranean, African, and Asian heritage. The defect in production may involve one of two components of hemoglobin called the alpha and beta protein chains, and the disease is defined as alpha thalassemia or beta thalassemia accordingly. The "beta minor" form (sometimes called beta thal trait, as with sickle cell) occurs when a person inherits one normal gene and one beta thalassemia gene; it causes a mild anemia and no symptoms. The "beta major" form (due to inheriting two beta thalassemia genes and also called Cooley’s anemia) is more severe and may result in growth problems, jaundice, and severe anemia.

    Other less common types of inherited forms of hemolytic anemia include:

    • Hereditary spherocytosis—results in abnormally shaped RBCs that may be seen on a blood smear
    • Hereditary elliptocytosis—another cause of abnormally shaped RBCs seen on a blood smear
    • Glucose-6-phosphate dehydrogenase (G6PD) deficiency—G6PD is an enzyme that is necessary for RBC survival. Its deficiency may be diagnosed with a test for its activity.
    • Pyruvate kinase deficiency—Pyruvate kinase is another enzyme important for RBC survival and its deficiency may also be diagnosed with a test for its activity.

    Laboratory Tests
    Since some of these inherited forms may have mild symptoms, they may first be detected on a routine CBC and blood smear, which can reveal various abnormal results that give clues as to the cause. Follow-up tests are then usually performed to make a diagnosis. Some of these include:

    • Tests for hemoglobin variants such as hemoglobin electrophoresis
    • DNA analysis—not routinely done but can be used to help diagnose hemoglobin variants, thalassemia, and to determine carrier status.
    • G6PD test—to detect deficiency in this enzyme
    • Osmotic fragility test—detects RBCs that are more fragile than normal, which may be found in hereditary spherocytosis.

    These genetic disorders cannot be cured but often the symptoms resulting from the anemia may be alleviated with treatment as necessary.

    Acquired Hemolytic Anemia
    Some of the conditions or factors involved in acquired forms of hemolytic anemia include:

    • Autoimmune disorders—conditions in which the body produces antibodies against its own red blood cells. It is not understood why this happens.
    • Transfusion reaction—result of blood donor-recipient incompatibility. This occurs very rarely but when it does, it can have some serious complications. For more on this, see the Blood Banking article.
    • Mother-baby blood group incompatibility—may result in hemolytic disease of the newborn.
    • Drugs—certain drugs such as penicillin can trigger the body to produce antibodies directed against RBCs or cause the direct destruction of RBCs.
    • Physical destruction of RBCs by, for example, an artificial heart valve or cardiac bypass machine used during open-heart surgery
    • Paroxysmal Nocturnal Hemoglobinuria (PNH)—a rare condition in which the different types of blood cells including RBCs, WBCs and platelets are abnormal. Because the RBCs are defective, they are destroyed by the body earlier than the normal lifespan. As the name suggests, people with this disorder can have acute, recurring episodes in which many RBCs are destroyed. This disease occurs due to a change or mutation in a gene called PIGA in the stem cells that make blood. Though it is a genetic disorder, it is not passed from one generation to the next (it is not an inherited condition). Patients will often pass dark urine due to the hemoglobin released by destroyed RBCs being cleared from the body by the kidneys. This is most noticeable first thing in the morning when urine is most concentrated. Episodes are thought to be brought on when the body is under stress during illnesses or after physical exertion. For more on this, see the Genetics Home Reference webpage.

    These types of hemolytic anemias are often first identified by signs and symptoms, during physical examination, and by medical history. A medical history can reveal, for example, a recent transfusion, treatment with penicillin, or cardiac surgery. A CBC and/or blood smear may show various abnormal results, and depending on those findings, additional follow-up tests may be performed.

    Treatments for hemolytic anemia are as varied as the causes. However, the goals are the same: to treat the underlying cause of the anemia, to decrease or stop the destruction of RBCs, and to increase the RBC count and/or hemoglobin level to alleviate symptoms. This may involve, for example:

    • Drugs used to decrease production of autoantibodies that destroy RBCs
    • Blood transfusions to increase the number of healthy RBCs
    • Bone marrow transplant—to increase production of normal RBCs
    • Avoiding triggers that cause the anemia such as the cold in some forms of autoimmune hemolytic anemia or fava beans for those with G6PD deficiency.


    Anemia Caused by Chronic Diseases
    Chronic (long-term) illnesses can cause anemia. Often, anemia caused by chronic diseases goes undetected until a routine test such as a complete blood count reveals abnormal results. Several follow-up tests may be used to determine the underlying cause. There are many chronic conditions and diseases that can result in anemia. Some examples of these include:
    • Kidney disease—Red blood cells are produced by the bone marrow in response to a hormone called erythropoietin, made primarily by the kidneys. Chronic kidney disease can cause anemia resulting from too little production of this hormone; the anemia can be treated by giving erythropoietin injections.
    • Inflammatory conditions—Whenever there are chronic diseases that stimulate the body’s inflammatory system, the ability of the bone marrow to respond to erythropoietin is decreased. For example, rheumatoid arthritis (a severe form of joint disease caused by the body attacking its own joints, termed an autoimmune disease) can cause anemia by this mechanism.
    • Other diseases that can produce anemia in the same way as inflammatory conditions include chronic infections (such as with HIV or tuberculosis, TB), cancer, and cirrhosis.

    A number of tests may be used as follow-up to abnormal results of initial tests, such as a complete blood count (CBC) and blood smear, to determine the underlying cause of chronic anemia.

    Treatment of anemia due to chronic conditions usually involves determining and/or resolving the underlying disease. Blood transfusions may be used to treat the condition in the short term.


    Thyroid Diseases




    What is it?
    The thyroid is a small, butterfly-shaped gland located just below the Adam's apple. This gland plays a very important role in controlling your body's metabolism, that is, the rate at which your body uses energy. It does this by producing thyroid hormones (primarily thyroxine, or T4, and triiodothyronine, or T3), chemicals that travel through your blood to every part of your body. These thyroid hormones tell the cells in your body how fast to use energy and create proteins. The thyroid gland also makes calcitonin, a hormone that helps to regulate calcium levels in the blood by inhibiting the breakdown (reabsorption) of bone and increasing calcium excretion from the kidneys.

    The body has an elaborate feedback system to control the amount of T4 and T3 in the blood. When blood levels decrease, the hypothalamus releases thyrotropin-releasing hormone, which in turn causes the pituitary gland (a tiny gland located below the hypothalamus, almost in the center of the head) to release thyroid-stimulating hormone (TSH). TSH stimulates the thyroid gland to produce and secrete thyroid hormones. When there is sufficient thyroid hormone in the blood, the amount of TSH decreases to maintain constant amounts of the thyroid hormones T4 and T3.

    Inside the thyroid, most of the T4 is stored bound to a protein called thyroglobulin. When the need arises, the thyroid gland creates more T4 and/or releases some of what is stored. In the bloodstream, most T4 is bound to a protein called thyroxine-binding globulin (TBG) and is relatively inactive. T4 is converted to T3 by the liver and in many other tissues. T3 is primarily responsible for controlling the rate of body functions.
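
    The feedback loop just described can be made concrete with a toy simulation. This is only an illustration of the qualitative behavior (low T4 drives TSH up; high T4 drives it down); every constant below is invented, and nothing here is a physiological model.

        # Toy sketch of the hypothalamus-pituitary-thyroid feedback loop.
        # All constants are illustrative, not physiological.
        SET_POINT = 1.0                # desired (normalized) T4 level

        def simulate(t4=0.2, steps=10):
            """Discrete-time loop: low T4 -> more TSH -> more T4 output."""
            for step in range(steps):
                tsh = max(0.0, 2.0 * (SET_POINT - t4))  # pituitary raises TSH when T4 is low
                production = 0.3 * tsh                  # thyroid output rises with TSH
                clearance = 0.2 * t4                    # T4 is steadily cleared from the blood
                t4 = t4 + production - clearance
                print(f"step {step:2d}  T4={t4:4.2f}  TSH={tsh:4.2f}")

        if __name__ == "__main__":
            simulate()   # T4 settles near its set point as TSH falls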

    Thyroid diseases are primarily conditions that affect the amount of thyroid hormones being produced. Some create too few, leading to hypothyroidism and a slowing of body functions. This hypothyroidism causes symptoms such as weight gain, dry skin, constipation, cold intolerance, puffy skin, hair loss, fatigue, and menstrual irregularity in women. Severe untreated hypothyroidism, called myxedema, can lead to heart failure, seizures, and coma. In children, hypothyroidism can stunt growth and delay sexual development. In infants, it can cause mental retardation. For this reason, hypothyroidism testing is performed in the United States as part of newborn blood screening programs since early detection and treatment can minimize long-term damage.

    If a thyroid disorder creates excessive amounts of thyroid hormone, the result is hyperthyroidism and the acceleration of body functions. This can lead to symptoms such as increased heart rate, anxiety, weight loss, difficulty sleeping, tremors in the hands, weakness, and sometimes diarrhea. There may be puffiness around the eyes, dryness, irritation, and, in some cases, bulging of the eyes. The affected person may experience light sensitivity and visual disturbances. Because the eyes may not move normally, the person may appear to be staring.

    Common Thyroid Diseases

    About 20 million Americans have some form of thyroid disease. These are the most common:

    Graves’ Disease – This is the most common cause of hyperthyroidism. It is a chronic disorder in which the affected person’s immune system produces antibodies that attack the thyroid, causing inflammation, damage, and the production of excessive amounts of thyroid hormone.

    Hashimoto’s Thyroiditis – This is the most common cause of hypothyroidism in the United States. Like Graves’ disease, it is a chronic autoimmune condition related to the production of antibodies that target the thyroid and cause inflammation and damage. With Hashimoto’s thyroiditis, however, the body makes decreased amounts of thyroid hormone.

    Thyroid Cancer—Thyroid cancer is fairly uncommon, with only about 1,500 deaths and 33,550 new cases diagnosed in 2007 in the U.S. There are four main types of thyroid cancer: papillary, follicular, anaplastic, and medullary. About 60-70% of thyroid cancer cases are papillary. This type affects more women than men and is more common in younger people. About 15% of thyroid cancers are follicular, a more aggressive type of cancer that tends to occur in older women. Anaplastic cancer, also found in older women, accounts for about 5% of thyroid cancers and tends to be both aggressive and difficult to treat. Medullary thyroid cancer (MTC) produces calcitonin and may be found alone or linked with other endocrine cancers in a syndrome called multiple endocrine neoplasia. MTC can also be difficult to treat if it spreads beyond the thyroid.

    Thyroid Nodules—A thyroid nodule is a small lump on the thyroid gland that may be solid or a fluid-filled cyst. As many as 4% of women and 1% of men will have one or more thyroid nodules; however, the overwhelming majority of these nodules are harmless. Occasionally, thyroid nodules can be cancerous and need to be treated.

    Thyroiditis—Thyroiditis is an inflammation of the thyroid gland. It may be associated with either hypo- or hyperthyroidism. It may be painful, feeling like a sore throat, or painless. Thyroiditis may be due to autoimmune activity, an infection, exposure to a chemical that is toxic to the thyroid, or an unknown cause. Depending on the cause, it can be acute but transient or chronic.

    Goiters—A thyroid goiter is a visible enlargement of the thyroid gland. In the past, this condition was relatively common and was due to a lack of iodine in the diet. Iodine is a necessary component of thyroid hormone production. In the United States, where iodine is now routinely added to table salt (iodized) and used to clean milking cows’ udders, the incidence of dietary-related goiters has declined significantly. In other parts of the world, however, iodine-related goiters are still common and represent the most common cause of hypothyroidism in some countries. Any of the diseases listed above can also cause goiters. Goiters may compress vital structures of the neck, including the trachea and esophagus. This compression can make it difficult to breathe and swallow.

    Tests
    Laboratory Tests

    The first test your doctor will usually order to detect thyroid dysfunction is a TSH test. If your TSH level is abnormal, the doctor will usually order a total T4 or free T4 test to confirm the diagnosis. A total T3 or free T3 test may be ordered as well. (A rough sketch of this ordering sequence, in code, follows the list below.)

    • TSH – to test for hypothyroidism and hyperthyroidism, to screen newborns for hypothyroidism, and to monitor thyroid replacement therapy
    • T4 or free T4 – to test for hypothyroidism and hyperthyroidism and to screen newborns for hypothyroidism
    • T3 or free T3 – to test for hyperthyroidism
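
    As a rough sketch only, the sequence just listed can be expressed as a small decision helper. The boolean inputs are hypothetical stand-ins for a doctor's judgment, not real laboratory logic.

        def thyroid_workup(tsh_abnormal, suspect_hyperthyroidism=False):
            """Sketch of the sequence described above: TSH first, then T4 to
            confirm, with T3 used mainly when hyperthyroidism is suspected."""
            ordered = ["TSH"]
            if tsh_abnormal:
                ordered.append("total or free T4")        # confirms the abnormal TSH
                if suspect_hyperthyroidism:
                    ordered.append("total or free T3")    # T3 mainly tests for hyperthyroidism
            return ordered

        print(thyroid_workup(tsh_abnormal=True, suspect_hyperthyroidism=True))
        # ['TSH', 'total or free T4', 'total or free T3']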

    Additional tests that may be performed include:

    • Thyroid antibodies - to help distinguish among types of thyroiditis and identify autoimmune thyroid conditions
    • Calcitonin - to help detect the presence of excessive calcitonin production

    Screening

    Screening for thyroid disease is controversial, and there is no consensus in the medical community as to who would benefit from screening and at what age to begin. In 2004, the U.S. Preventive Services Task Force found insufficient evidence to recommend for or against routine screening for thyroid disease in adults. However, the American Thyroid Association currently recommends that everyone over 35 years of age be screened with a TSH test every 5 years, and the American Association of Clinical Endocrinologists recommends that all women be tested for hypothyroidism by 50 years of age (sooner if they have a family history of thyroid disease), as well as women who are pregnant or planning to become pregnant, in order to detect thyroid problems.

    Non-Laboratory Tests

    • Thyroid Scans – a test that uses radioactive iodine or technetium to look for thyroid gland abnormalities and to evaluate thyroid function in different areas of the thyroid
    • Ultrasound – an imaging scan that allows doctors to determine whether a nodule is solid or fluid filled and can help measure the size of the thyroid gland
    • Biopsies – often a fine-needle biopsy, a procedure that involves inserting a needle into the thyroid and removing a small amount of tissue and/or fluid from a nodule or other area that the doctor wants to examine; an ultrasound is used to guide the needle into the correct position

    Treatment

    Treatment for thyroid disease depends on the cause and the levels of hormone production. Therapy for disorders that cause hyperthyroidism may involve radioactive iodine (to destroy part or all of the thyroid), anti-thyroid drugs, or surgery to remove the thyroid. Sometimes all three of these treatments may be used. If the thyroid is destroyed or removed, the patient will become hypothyroid and will need to take synthetic thyroid hormones.

    Treatment for thyroid cancers depends on the type of cancer and how far it has spread. Thyroid cancer often requires removal of all or part of the thyroid and may involve radioactive iodine treatment and treatment with thyroid hormones. While papillary cancer is usually easily treated and most cases are cured, the others can be a challenge. In some cases, radiation and chemotherapy are used before and after surgical removal of the thyroid.

    Treatment for all types and causes of hypothyroidism is usually straightforward and involves thyroid hormone replacement therapy.

    Understanding Your Tests


    Like many areas in medicine, clinical lab testing often provides few simple answers to commonly asked questions. The issues - on topics like insurance reimbursement and reference ranges - can be very complex. While we can't offer the kinds of short, easy answers that we have come to expect in this information age, the following articles attempt to break down the issues in a way that will help you understand them a bit better and perhaps ask the appropriate questions of your doctor.

    Deciphering Your Lab Report
    If you've had laboratory tests performed, you may have been given a copy of the report by the lab or your health care provider. Once you get your report, however, it may not be easy for you to read or understand, leaving you with more questions than answers. This article points out some of the different sections that may be found on a typical lab report, explains some of the information that may be found in those sections, and shows you an example of what a lab report may look like.

    Reference Ranges and What They Mean
    Test results are usually interpreted based on their relation to a reference range. This article will help to explain what a reference range is and why test results and reference ranges should not be interpreted in a vacuum.
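
    As a minimal illustration of the comparison a lab makes against a reference range (the analyte and interval below are invented, not real reference values):

        def flag(result, low, high):
            """Flag a numeric result against a reference interval [low, high]."""
            if result < low:
                return "L"    # below the reference range
            if result > high:
                return "H"    # above the reference range
            return ""         # within the reference range

        # Hypothetical analyte with a made-up reference interval of 0.4-4.0
        print(flag(5.2, 0.4, 4.0))    # -> "H"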

    Evidence-Based Approach to Medicine Improves Patient Care
    Medical knowledge is accumulating—and changing—with such dizzying speed that the medical community has found it needs new methods to cope with it all. Evidence-based medicine (EBM) is a formalized system for helping health professionals cope with this information explosion. This article explains what EBM is and the role of laboratory testing in its application.

    How Reliable is Laboratory Testing?
    Laboratory tests drive a large part of the clinical decisions our doctors make about our health, from diagnosis through therapy and prognosis. Given the crucial role that test data play in medical decision-making, we prepared this article to help you understand the key concepts and practices that are involved in making laboratory tests reliable.

    The Universe of Genetic Testing
    An increasing number of genetic tests are becoming available as a result of recent and rapid advances in biomedical research. It has been said that genetic testing may revolutionize the way many diseases are diagnosed. But genetic testing does not just help a physician diagnose disease. This article discusses genetic testing and the different reasons genetic tests are performed.

    The World of Forensic Laboratory Testing
    Forensic testing isn't quite like what you may see on television. This article explains what forensic testing is, when it is necessary, and dispels some of the misconceptions you may have about this form of laboratory testing.

    Pharmacogenomics
    Pharmacogenomics is the study of how drugs are metabolized in the body and the variations in the genes that produce the metabolizing enzymes. It offers doctors the opportunity to individualize drug therapy for patients based on their genetic make-up. This article provides specific examples of currently available tests in this category and describes some of the benefits and concerns with this area of laboratory testing.

    Home Testing
    As health care consumers continue to seek more convenience, particularly people with chronic conditions and the elderly, the home testing market is growing rapidly. Here's a glimpse at the market and the opportunities as well as the trade-offs.

    Collecting Samples for Testing
    Today, laboratory technologies allow testing on a wide variety of samples collected from the human body, beyond just blood and urine. This article provides examples of samples that can be obtained as the body naturally eliminates them, those that are quick and easy to acquire since they reside in the body's orifices, and some that require minor surgery and anesthesia to access.

    Putting New Laboratory Tests into Practice
    Did you ever wonder why and how new lab tests are developed? How do they go from development to being used in medical practice? This five-part series of articles will answer these questions and more as they describe how different types of laboratory tests are developed, validated, and made available for use by patients and their health care providers.

    Commercial Laboratory Tests and FDA Approval
    The second in the series of articles mentioned above, this discusses the types of tests that are manufactured and sold in bulk to hospital and reference laboratories, clinics, doctors' offices, and other health care facilities. In the US, the development and marketing of these commercial tests are regulated by the Food and Drug Administration (FDA), and this article describes how these types of tests are classified.

    Coping with Test Pain, Discomfort, and Anxiety
    Nobody particularly enjoys having their blood drawn or providing a urine or stool sample, but a medical test conducted on a small sample collected from your body can give your doctor information that can help save or improve the quality of your life. This series of articles has some tips on how to approach the experience with less stress. Other titles in the series include Tips on Blood Testing, Tips for Children, and Tips for the Elderly.

    Staying Healthy in an Era of Patient Responsibility
    As health care consumers have been given more responsibility for their care, more attention has been given to the value of preventive medicine. This article discusses how you can take an active role in your health care before you get sick, offering general suggestions as well as more detail on the role of screening tests.  

    Test Preparation: Your Role
    One of the most important factors in determining the accuracy and reliability of your laboratory test is you, the patient. This brief article explains your role in the process and ways in which you may need to prepare for your lab tests.

    Laboratory Methods
    Labs use a variety of methods to test the numerous analytes that are of interest to the medical community. Understanding the method used for a test provides a broader context for understanding your test results. This article provides brief explanations of several common laboratory methods mentioned on this site.

    Autoantibodies




    What are they?

    Autoantibodies are a group of antibodies (immune proteins) that mistakenly target and damage specific tissues or organs of the body. One or more autoantibodies may be produced by a person’s immune system when it fails to distinguish between “self” and “non-self” proteins. Usually the immune system is able to discriminate, recognizing foreign substances (“non-self”) and ignoring the body’s own cells (“self”) while not overreacting to non-threatening substances such as foods, dust, pollen, and beneficial microorganisms. It creates antibodies only when it perceives what it has been exposed to as a threat (“non-self”). When the immune system ceases to recognize one or more of the body’s normal constituents as “self”, it may produce autoantibodies that attack its own cells, tissues, and/or organs, causing inflammation and damage.

    The causes of this inappropriate immune response are varied and not well understood, and the result is often a chronic autoimmune disorder. While no direct link has been established, it is thought that many cases of autoantibody production are due to a genetic predisposition combined with an environmental trigger (such as a viral illness or prolonged exposure to certain toxic chemicals). Some families have been shown to have a high prevalence of autoimmune conditions; however, individual family members may have different autoimmune disorders or may never develop an autoimmune condition. Researchers believe there may also be a hormonal component, as many autoimmune conditions are more common in women of childbearing age.

    The type of autoimmune disorder or disease that occurs and the amount of destruction done to the body depends on which systems or organs are targeted by the autoantibodies. Disorders caused by autoantibodies that primarily affect a single organ, such as the thyroid in Graves’ disease or Hashimoto’s thyroiditis, are often the easiest to diagnose as they frequently present with organ-related symptoms.

    Disorders due to systemic autoantibodies (those that affect multiple organs or systems) can be much more elusive. Although the associated autoimmune disorders are rare, the signs and symptoms they cause are relatively common and may include: arthritis-type joint pain, fatigue, fever, rashes, cold or allergy-type symptoms, weight loss, and muscular weakness. Additional complications may include vasculitis and anemia. Signs and symptoms vary from person to person, and they can change over time and/or with organ involvement, often tapering off and then flaring up unexpectedly. To complicate matters, some people may have more than one autoantibody, more than one autoimmune disorder, and/or an autoimmune disorder without a detectable level of an autoantibody. All of this can make it difficult for the doctor to identify the prime cause and arrive at a diagnosis.

    The diagnosis of autoimmune disorders starts with a complete medical history and a thorough physical exam. The doctor may request one or more diagnostic studies that will help to identify a specific disease. These studies may include:

    • blood tests to detect autoantibodies, inflammation, and organ involvement/damage.
    • x-rays and other imaging scans to detect changes in bones, joints, and organs.
    • biopsies to look for pathologic changes in tissue specimens.

    As a rule, information is required from multiple sources (rather than a single laboratory test) to accurately diagnose disorders associated with systemic autoantibodies. 


    Why are autoantibody tests done?

    Autoantibody tests are used to help diagnose autoimmune disorders. In a few cases, they are also used to help evaluate the severity of the condition, to monitor remissions, flares, and relapses of the disorder, and to monitor the effectiveness of treatments.

    Autoantibody tests may be ordered when a patient presents with chronic, progressive arthritic symptoms, fever, fatigue, muscle weakness, and/or a rash that cannot readily be explained. One test that is often ordered first is the antinuclear antibody (ANA) test, a marker of the autoimmune process that is positive in a variety of autoimmune diseases. It may be positive in systemic lupus erythematosus, Sjögren’s syndrome, rheumatoid arthritis, autoimmune hepatitis, primary biliary cirrhosis, alcohol-related liver disease, and hepatitis B. It is frequently followed up with other specific autoantibody tests, such as anti-double strand DNA (anti-dsDNA), anti-Sjögren’s Syndrome A (anti-SS-A) (Ro), anti-Sjögren’s Syndrome B (anti-SS-B) (La), and anti-ribonucleic protein (anti-RNP). In addition, other tests associated with arthritis and inflammation, such as a rheumatoid factor (RF), an erythrocyte sedimentation rate (ESR), a C-reactive protein (CRP), and/or complement levels (C3, C4), may also be performed.

    A single autoantibody test is not diagnostic, but it adds weight to the doctor’s determination of whether a particular autoimmune disorder is likely or unlikely to be present. Each autoantibody result should be considered individually and as part of the group. Some systemic disorders, such as systemic lupus erythematosus (SLE), may be more likely if several autoantibodies are present, while others, such as mixed connective tissue disease (MCTD), may be more likely if a single autoantibody (anti-RNP) is the only one present. Those who have more than one autoimmune disorder may have several detectable autoantibodies.

    Whether a particular autoantibody will be present is both very individual and a matter of statistics. Each will be present in a certain percentage of people who have a particular autoimmune disorder. For instance, up to 80% of those with SLE will have a positive anti-double strand DNA (anti-dsDNA) test, but only about 25-30% will have a positive anti-RNP. Some individuals who do have an autoimmune disorder will have negative autoantibody test results at first, but the autoantibodies may develop at a later date as the disorder progresses. A small percentage of the general population may have one or more autoantibodies present in their blood with no associated symptoms. Autoantibodies are more commonly found in older people.
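
    A small Bayes calculation makes concrete why a single positive result only "adds weight." The 80% sensitivity figure for anti-dsDNA in SLE comes from the paragraph above; the pre-test probability and the specificity used here are invented purely for illustration.

        def post_test_probability(prior, sensitivity, specificity):
            """Probability of disease given a positive test (Bayes' theorem)."""
            true_pos = prior * sensitivity
            false_pos = (1 - prior) * (1 - specificity)
            return true_pos / (true_pos + false_pos)

        # Sensitivity 0.80 is the anti-dsDNA figure cited above; the prior
        # (0.10) and specificity (0.95) are assumed values for illustration.
        p = post_test_probability(prior=0.10, sensitivity=0.80, specificity=0.95)
        print(f"{p:.2f}")   # ~0.64: far more likely than before, but not certain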


    Common Autoantibodies and Disease Associations

    Systemic autoantibodies

    The table below includes just some of the more common autoantibodies that are used to identify a variety of systemic autoimmune disorders. These disorders cause inflammatory, arthritis-type symptoms. The table identifies whether each autoantibody test is generally positive with several of these disorders.

    Common Systemic Autoantibodies

    Key: + = antibody is present in most people with this condition (positive); +/- = antibody may or may not be present.
    Abbreviations: DM/PM = dermatomyositis/polymyositis; MCTD = mixed connective tissue disease; RA = rheumatoid arthritis; WG = Wegener's granulomatosis.

    Autoantibody                                | SLE | Scleroderma | Sjögren's | DM/PM | MCTD | RA  | WG | Celiac Disease
    --------------------------------------------|-----|-------------|-----------|-------|------|-----|----|---------------
    Antinuclear Antibody (ANA)                  |  +  |      +      |     +     |  +/-  | +/-  |  +  |    |
    Anti-neutrophil Cytoplasmic Antibody (ANCA) |     |             |           |       |      |     | +  |
    Anti-Sjögren's Syndrome A (Anti-SS-A) (Ro)  | +/- |             |     +     |       |      | +/- |    |
    Anti-Sjögren's Syndrome B (Anti-SS-B) (La)  |     |             |     +     |       |      | +/- |    |
    Cardiolipin autoantibodies                  |  +  |             |           |       |      |     |    |
    Anti-Double Strand DNA (Anti-dsDNA)         |  +  |     +/-     |    +/-    |       | +/-  | +/- |    |
    Rheumatoid Factor (RF)                      | +/- |     +/-     |    +/-    |       | +/-  |  +  |    |
    Anti-Jo-1                                   |     |             |           |  +/-  | +/-  |     |    |
    Anti-Ribonucleic Protein (Anti-RNP)         | +/- |     +/-     |     +     |       |  +   | +/- |    |
    Antiscleroderma Antibody (Anti-SCL-70)      |     |             |    +/-    |  +/-  | +/-  |     |    |
    Anti-Smith (Anti-SM)                        |  +  |             |    +/-    |       | +/-  |     |    |
    Endomysial/Gliadin Autoantibodies           |     |             |           |       |      |     |    |       +
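
    For readers who want to work with these associations programmatically, the rows above can be transcribed into a simple lookup structure. The snippet below is a direct transcription of a few representative rows of the reconstructed table; it contains no clinical information beyond the table itself.

        # A few rows of the table above as a lookup structure.
        # "+" = present in most people with the condition; "+/-" = may be present.
        ASSOCIATIONS = {
            "ANA":        {"SLE": "+", "Scleroderma": "+", "Sjogren's": "+",
                           "DM/PM": "+/-", "MCTD": "+/-", "RA": "+"},
            "ANCA":       {"Wegener's granulomatosis": "+"},
            "Anti-SS-A":  {"SLE": "+/-", "Sjogren's": "+", "RA": "+/-"},
            "Anti-dsDNA": {"SLE": "+", "Scleroderma": "+/-", "Sjogren's": "+/-",
                           "MCTD": "+/-", "RA": "+/-"},
            # ...the remaining rows follow the same pattern.
        }

        def diseases_for(antibody):
            """Return the disorders associated with a positive result."""
            return ASSOCIATIONS.get(antibody, {})

        print(diseases_for("ANCA"))   # {"Wegener's granulomatosis": "+"}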

    Organ-specific autoantibodies

    Analytes, Bilirubin

    Bilirubin


    Also known as: Total bilirubin; TBIL; Neonatal bilirubin; Direct bilirubin; Conjugated bilirubin; Indirect bilirubin; Unconjugated bilirubin
    Formal name: Bilirubin

    The Test Sample

    What is being tested?

    Bilirubin is an orange-yellow pigment found in bile. Red blood cells (RBCs) normally degrade after 120 days in the circulation. At this time, a component of the RBCs, hemoglobin (the red-colored pigment of red blood cells that carries oxygen to tissues), breaks down into unconjugated bilirubin. Approximately 250 to 350 mg of bilirubin is produced daily in a normal, healthy adult, of which 85% is derived from damaged or old red cells that have died, with the remaining amount from the bone marrow or liver.

    Unconjugated bilirubin is carried to the liver, where sugars are attached to it to make it water soluble, producing conjugated bilirubin. This conjugated bilirubin is passed into the bile by the liver, is further broken down by bacteria in the small intestine, and is eventually excreted in the feces. The breakdown products of bilirubin give feces its characteristic brown color. If bilirubin levels in the blood increase, the appearance of jaundice becomes more evident. Normally, almost all bilirubin in the blood is unconjugated.

    How is the sample collected for testing?

    In newborns, blood is often collected from a heelstick, a technique that uses a small, sharp blade to cut the skin on the infant’s heel and collect a few drops of blood into a small tube. For adults, blood is typically collected by needle from a vein. In some health care facilities, noninvasive technology is available that measures bilirubin using an instrument placed on the skin (a transcutaneous bilirubin meter).

    NOTE: If undergoing medical tests makes you or someone you care for anxious, embarrassed, or even difficult to manage, you might consider reading one or more of the following articles: Coping with Test Pain, Discomfort, and Anxiety, Tips on Blood Testing, Tips to Help Children through Their Medical Tests, and Tips to Help the Elderly through Their Medical Tests.

    Another article, Follow That Sample, provides a glimpse at the collection and processing of a blood sample and throat culture.

    Is any test preparation needed to ensure the quality of the sample?

    No test preparation is needed.

    The Test

    How is it used?

    When bilirubin levels are high, a condition called jaundice occurs, and further testing is needed to determine the cause. Too much bilirubin may mean that too much is being produced (usually due to increased hemolysis) or that the liver is incapable of adequately removing bilirubin in a timely manner due to blockage of bile ducts, liver diseases such as cirrhosis, acute hepatitis, or inherited problems with bilirubin processing.

    It is not uncommon to see high bilirubin levels in newborns, typically 1 to 3 days old. This is sometimes called physiologic jaundice of the newborn. Within the first 24 hours of life, up to 50% of full-term newborns, and an even greater percentage of pre-term babies, may have a high bilirubin level. After birth, newborns begin breaking down the excess red blood cells (RBCs) they are born with, and, since the newborn’s liver is not fully mature, it is unable to process the extra bilirubin, causing the infant’s bilirubin level to rise in the blood and other body tissues. This situation usually resolves itself within a few days. In other instances, newborns’ red blood cells may be destroyed because of a blood incompatibility between the baby and the mother, called hemolytic disease of the newborn.

    In adults or older children, bilirubin is measured to diagnose and/or monitor liver diseases, such as cirrhosis, hepatitis, or gallstones. Patients with sickle cell disease or other causes of hemolytic anemia may have episodes where excessive RBC destruction takes place, increasing bilirubin levels.


    When is it ordered?

    A doctor usually orders a bilirubin test in conjunction with other laboratory tests (alkaline phosphatase, aspartate aminotransferase, alanine aminotransferase) for a patient who shows signs of abnormal liver function. A bilirubin level may be ordered when a patient:
    • shows evidence of jaundice
    • has a history of drinking excessive amounts of alcohol
    • has suspected drug toxicity
    • has been exposed to hepatitis viruses

    Other symptoms that may be present include:

    • dark, amber-colored urine
    • nausea/vomiting
    • abdominal pain and/or swelling
    • fatigue and general malaise that often accompany chronic liver disease

    Determining a bilirubin level in newborns with jaundice is considered standard medical care.


    What does the test result mean?


    Newborns: Excessive bilirubin damages developing brain cells in infants (kernicterus) and may cause mental retardation, learning and developmental disabilities, hearing loss, or eye movement problems. It is important that bilirubin in newborns does not get too high. When the level of bilirubin rises above a critical threshold, special treatments are initiated to lower it. An excessive bilirubin level may result from the accelerated breakdown of red blood cells due to a blood type incompatibility between the mother and her newborn. For example, if the mother is Rh-negative and the fetus inherits Rh-positive blood from an Rh-positive father, the mother may make antibody against Rh-positive blood; this antibody crosses the placenta and causes the fetal Rh-positive red blood cells to hemolyze, resulting in excessively elevated bilirubin levels with jaundice, anemia, and possible kernicterus.

    Adults and children: Bilirubin levels can be used to identify liver damage/disease or to monitor the progression of jaundice. Increased total or unconjugated bilirubin may be a result of hemolytic, sickle cell, or pernicious anemia or of a transfusion reaction. If conjugated bilirubin is elevated, there may be some kind of blockage of the liver or bile ducts, hepatitis, trauma to the liver, cirrhosis, a drug reaction, or long-term alcohol abuse.

    Inherited disorders that cause abnormal bilirubin metabolism (Gilbert’s, Rotor’s, Dubin-Johnson, Crigler-Najjar syndromes) may also cause increased levels.
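
    The conjugated-versus-unconjugated reasoning above can be sketched as simple classification logic. The cutoffs used below (an upper limit of 1.2 mg/dL for total bilirubin and a 20% conjugated fraction) are common rules of thumb adopted here only as illustrative assumptions, not diagnostic criteria.

        def classify_hyperbilirubinemia(total_mg_dl, conjugated_mg_dl,
                                        total_upper=1.2, conj_fraction=0.20):
            """Label an elevated bilirubin as a conjugated or unconjugated
            pattern. Cutoffs are illustrative assumptions, not criteria."""
            if total_mg_dl <= total_upper:
                return "within reference range"
            if conjugated_mg_dl / total_mg_dl > conj_fraction:
                return "conjugated (direct) pattern - consider blockage, hepatitis, cirrhosis"
            return "unconjugated (indirect) pattern - consider hemolysis, inherited syndromes"

        print(classify_hyperbilirubinemia(6.0, 4.2))   # conjugated pattern
        print(classify_hyperbilirubinemia(4.0, 0.3))   # unconjugated pattern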

    Low levels of bilirubin are not generally a concern and are not monitored.


    Is there anything else I should know?

    Although bilirubin may be toxic to brain development in newborns (up to the age of about 2–4 weeks), high bilirubin in older children and adults does not pose the same threat. In older children and adults, the “blood-brain barrier” is more developed and prevents bilirubin from crossing this barrier to the brain cells. Elevated bilirubin levels in children or adults, however, strongly suggest a medical condition that must be evaluated and treated.

    Bilirubin is not normally present in the urine. However, conjugated bilirubin is water-soluble and therefore may be excreted from the body in the urine when levels increase in the body. Its presence in the urine usually indicates blockage of liver or bile ducts, hepatitis or some other liver damage. The most common method for detecting urine bilirubin is using the dipstick test that is part of a urinalysis.

    Bilirubin levels tend to be slightly higher in males than females, while African Americans show lower values. Strenuous exercise may also increase bilirubin levels.


    Common Questions

    1.  Are some people more at genetic risk of abnormal bilirubin levels?

    Yes. Several inherited conditions affect bilirubin metabolism: Gilbert’s syndrome, Dubin-Johnson syndrome, Rotor’s syndrome, and Crigler-Najjar syndrome. Of these four syndromes, Crigler-Najjar is the most serious and may result in death. The first three are usually mild, chronic conditions that can be aggravated under certain circumstances but in general cause no significant health problems.


    2.  How do you treat abnormal bilirubin levels and/or jaundice?

    Treatment depends on the cause of the jaundice. In newborns, phototherapy (special light therapy), blood exchange transfusion in severe cases, and certain drugs may reduce the bilirubin level. In Gilbert’s, Rotor’s, and Dubin-Johnson syndromes, no treatment is usually necessary. Crigler-Najjar syndrome may respond to certain enzyme drug therapy or may require a liver transplant. Jaundice caused by an obstruction often is resolved by surgery to remove the blockage. Jaundice due to cirrhosis is often a result of long-term viral hepatitis or alcohol abuse and may not respond well to any type of therapy. Anti-viral medications, abstaining from alcohol, avoiding other potential liver toxins, and good nutrition may improve the situation if the liver has not been damaged too badly.


    3.  Is there anything I can do to maintain healthy bilirubin levels?

    While there is no one specific thing, it is clear that excessive and long-term alcohol consumption can lead to cirrhosis and a permanently damaged liver. Avoiding alcohol and the overuse or long-term use of drugs, and eating a proper diet, may help to sustain a healthy liver. Blockages due to duct stones, pancreatic cancer, or cysts may require surgery.

    Bilirubin


    a brownish yellow pigment of bile, secreted by the liver in vertebrates, which gives to solid waste products (feces) their characteristic colour. It is produced in bone marrow cells and in the liver as the end product of red-blood-cell (hemoglobin) breakdown. The amount of bilirubin manufactured relates directly to the quantity of blood cells destroyed. About 0.5 to 2 grams are produced daily. It has no known function and can be toxic to the fetal brain.

    Bilirubin in the bloodstream is usually in a free, or unconjugated, state; it is attached to albumin, a protein, as it is transported. Once in the liver it conjugates with glucuronic acid made from the sugar glucose. It is then concentrated to about 1,000 times the strength found in blood plasma. Much bilirubin leaves the liver and passes to the gallbladder, where it is further concentrated and mixed with the other constituents of bile. Bile stones can originate from bilirubin, and certain bacteria can infect the gallbladder and change the conjugated bilirubin back to free bilirubin and acid. The calcium from the freed bilirubin can settle out as pigment stones, which may eventually block the passageway (common bile duct) between the liver, gallbladder, and small intestine. When blockage occurs, conjugated bilirubin is absorbed into the bloodstream, and the skin becomes yellow in colour (see jaundice).

    Normally, conjugated bilirubin passes from the gallbladder or liver into the intestine. There, it is reduced by bacteria to mesobilirubinogen and urobilinogen. Some urobilinogen is reabsorbed back into the blood; the rest goes back to the liver or is excreted from the body in urine and fecal matter. In humans, bilirubin is believed to be unconjugated until it reaches the liver. In dogs, sheep, and rats, there is no bilirubin in the blood, though it is present in the liver.

    Blood analysis


    laboratory examination of a sample of blood to obtain information about its physical and chemical properties. Blood analysis is commonly carried out on a sample of blood drawn from the vein of the arm, the finger, or the earlobe; in some cases, the blood cells of the bone marrow may also be examined. Hundreds of hematological tests and procedures have been developed, and many can be carried out simultaneously on one sample of blood with such instruments as autoanalyzers. Blood analysis includes the following areas of study:

    1. Determination of the number of red blood cells (erythrocytes) and white blood cells (leukocytes) in the blood, together with the volume, sedimentation rate, and hemoglobin concentration of the red blood cells (blood count).
    2. Classification of the blood according to specific red blood cell antigens, or blood groups (see blood typing).
    3. Elucidation of the shape and structural details of blood cells.
    4. Study of the structure of hemoglobin and other blood proteins.
    5. Determination of the activity of various enzymes, or protein catalysts, that either are associated with the blood cells or are found free in the blood plasma.
    6. Study of blood chemistry.

    Other properties of blood that may be included in an analysis are total volume, circulation time, viscosity, clotting time and clotting abnormalities, acidity (pH), level of oxygen and carbon dioxide, and clearance rate of various substances (see kidney function test). In addition to the wide variety of procedures devised for the study of normal blood constituents, there are also special tests based on the presence in the blood of substances characteristic of specific infections, such as the serological tests for syphilis, hepatitis, and human immunodeficiency virus (HIV; the AIDS virus).

    Nobel Prize winners  

    Ziegler, Karl

    born Nov. 26, 1898, Helsa, near Kassel, Ger.
     
    died Aug. 12, 1973, Mülheim, W.Ger.

    German chemist who shared the 1963 Nobel Prize for Chemistry with the Italian chemist Giulio Natta. Ziegler's research with organometallic compounds made possible industrial production of high-quality polyethylene. Natta used Ziegler's organometallic compounds to make commercially useful polypropylene.

    Early life and education

    Reading an introductory physics textbook first whetted Ziegler's interest in science. It drove him to perform experiments in his home and to read extensively beyond his high school curriculum. His father, a Lutheran minister, often invited professors from the nearby University of Marburg for dinner. These combined influences help explain why he received an award for most outstanding student in his final year of high school and how he skipped his first year of courses at the University of Marburg, from which he received a doctorate in chemistry in 1920. He married Maria Kurtz in 1922, and in 1925 he completed his habilitation thesis, a prerequisite for a university position.

     

    Scientific career

    After serving as a lecturer at Marburg and at the University of Frankfurt (1925–26), Ziegler accepted a professorship at the University of Heidelberg (1926–36). He began his research on carbon compounds and organometallic chemistry in Heidelberg. In 1936 Ziegler used his international reputation to secure the directorship of the chemical institute at the University of Halle. The Kaiser Wilhelm Institute for Coal Research (now the Max Planck Institute for Coal Research, one institute in the Max Planck Society for the Advancement of Science) in Mülheim offered Ziegler its directorship in 1943, which he accepted only after the institute gave him complete freedom to choose and implement his research topics and to keep patent rights and royalties on new inventions. For nearly two years, he commuted between his family in Halle and Mülheim, but with the approach of the Russian army, the family fled to Mülheim in 1945. In 1949 he helped reorganize the German Chemical Society and served as its president (1949–51).

    Ziegler combined classical organic chemistry with physical and analytical experimental methods in his pioneering polymerization syntheses. A longtime interest in lithium's reaction with butadiene, the starting compound for synthetic rubber production, led him to discover that ethylene reacted similarly to butadiene. In 1953 he prepared straight-chain polyethylene, the first plastic with a high melting point and large molecular weight. In 1900 the French organic chemist Victor Grignard had found that organomagnesium bromides (methylmagnesium bromide) reacted with acidic substances to produce longer-chain hydrocarbons and alcohols. Ziegler's early work on organosodium, organopotassium, and organolithium compounds in the 1940s and '50s showed that organolithium compounds were much stronger reagents than Grignard reagents (organic derivatives of magnesium). Instead of random chain branching resulting in low-melting point polymers of ethylene and other monomers, Ziegler's research enabled chemists to synthesize more durable, higher-melting, and unbranched polymers.

     

    Polyethylene

    Between 1952 and 1953, Ziegler and Hans-Georg Gellert, one of his former students from Halle, found that in the polymerization reaction organolithium compounds, except for lithium aluminum hydride, irreversibly decomposed into lithium hydride and an alkene. To establish whether lithium or aluminum was the more active metal, Gellert tested organoaluminum compounds. Triethylaluminum added several ethylene molecules end-to-end, but the carbon atom chains differed in length because a competing chain-ending reaction stopped the polymerization at different carbon atoms in the chain. Ziegler's research associate, Heinz Martin, and two graduate students, Erhard Holzkamp and Heinz Breil, discovered the cause of the chain-ending reaction. Holzkamp reacted isopropylaluminum and ethylene in a stainless-steel autoclave at 100 to 200 atmospheres and 100 °C (212 °F). They expected to produce an odd-numbered alkene (an organic compound with a carbon-carbon double bond) but instead obtained exclusively 1-butene. Further investigation by Ziegler and Holzkamp revealed that acidic cleaning of the autoclave wall had released traces of nickel, which stopped the polymerization reaction. Holzkamp confirmed this conclusion by deliberately adding nickel salts to the triethylaluminum-ethylene mixture in a glass reactor.

    Having discovered the cause of the chain-ending reaction, Ziegler needed a reagent to suppress it, and so he delegated Holzkamp and Breil to test other metals closely related to nickel. Holzkamp reacted chromium and produced polyethylene along with butene and other alkenes. Breil tested several closely related transition elements with disappointing results until he found that zirconium and titanium accelerated the polymerization reaction. A combination of high pressure, high temperature, and titanium charred and decomposed polyethylene, so Martin tested titanium under atmospheric conditions and produced polyethylene. The result of this research program was a rigid, high-melting, unbranched, strong polyethylene that chemists could prepare under mild conditions. Ziegler, meanwhile, had reorganized the institute, delegating administrative detail because he preferred to work on this research.

    A large measure of Ziegler's success came from his ability to go where the experiments went, regardless of whether they corroborated his previous ideas. Moreover, he visualized pure research as a method of gaining knowledge beneficial to society. He demonstrated the industrial applications of his research and marketed them accordingly. His discoveries in aluminum chemistry led to the production of long-chain, high-molecular-weight alcohols commonly used in detergents and to the construction of commercial-scale plants in the United States and Germany. By 1958 Ziegler had received dozens of licenses, which gave him an annual income of several million dollars.

     

    Awards and later years

    Besides his Nobel Prize, numerous scientific and chemical societies around the world elected Ziegler an honorary member, and he received many medals for his work, including the Lavoisier Medal of the French Chemical Society, the Carl Duisberg Award of the German Chemical Society, and the Swinburne Medal of the Plastics Institute, London.

    Ziegler retired from the institute in 1969 and became an honorary senator of the institute. Because his patent agreement with the institute made him wealthy, he set up the Ziegler Fund with some 40 million deutsche marks to support the institute's research. He also traveled around the world on cruises with his family and even chartered airplanes for eclipse viewing. During a 1972 eclipse-viewing cruise with his grandson, he became ill, and he died the following year.


    Blood group

    Introduction



    classification of blood based on inherited differences (polymorphisms) in antigens on the surfaces of the red blood cells (erythrocytes). Inherited differences of white blood cells (leukocytes), platelets (thrombocytes), and plasma proteins also constitute blood groups, but they are not included in this discussion.
     

    Historical background

    English physician William Harvey announced his observations on the circulation of the blood in 1616 and published his famous monograph titled Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus (The Anatomical Exercises Concerning the Motion of the Heart and Blood in Animals) in 1628. His discovery, that blood circulates around the body in a closed system, was an essential prerequisite of the concept of transfusing blood from one animal to another of the same or different species. In England, experiments on the transfusion of blood were pioneered in dogs in 1665 by physician Richard Lower. In November 1667 Lower transfused the blood of a lamb into a man. Meanwhile, in France, Jean-Baptiste Denis, court physician to King Louis XIV, had also been transfusing lambs' blood into human subjects and described what is probably the first recorded account of the signs and symptoms of a hemolytic transfusion reaction. Denis was arrested after a fatality, and the procedure of transfusing the blood of other animals into humans was prohibited, by an act of the Chamber of Deputies in 1668, unless sanctioned by the Faculty of Medicine of Paris. Ten years later, in 1678, the British Parliament also prohibited transfusions. Little advance was made in the next 150 years.

    In England in the 19th century, interest was reawakened by the activities of obstetrician James Blundell, whose humanitarian instincts had been aroused by the frequently fatal outcome of hemorrhage occurring after childbirth. He insisted that it was better to use human blood for transfusion in such cases.

    In 1875 German physiologist Leonard Landois showed that, if the red blood cells of an animal belonging to one species are mixed with serum taken from an animal of another species, the red cells usually clump and sometimes burst—i.e., hemolyze. He attributed the appearance of black urine after transfusion of heterologous blood (blood from a different species) to the hemolysis of the incompatible red cells. Thus, the dangers of transfusing blood of another species to humans were established scientifically.

    The human ABO blood groups were discovered by Austrian-born American biologist Karl Landsteiner in 1901. Landsteiner found that there are substances in the blood, antigens and antibodies, that induce clumping of red cells when red cells of one type are added to those of a second type. He recognized three groups—A, B, and O—based on their reactions to each other. A fourth group, AB, was identified a year later by another research team. Red cells of the A group clump with donor blood of the B group; those of the B group clump with blood of the A group; those of the AB group clump with those of the A or the B group because AB cells contain both A and B antigens; and those of the O group do not generally clump with any group, because they do not contain either A or B antigens. The application of knowledge of the ABO system in blood transfusion practice is of enormous importance, since mistakes can have fatal consequences.
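
    The clumping rules described in this paragraph follow mechanically from which antigens a group's red cells carry and which antibodies its serum contains. A minimal sketch of that logic, covering the ABO system only (Rh and all other systems are ignored):

        # ABO logic as described above: serum contains antibodies against
        # whichever A/B antigens the person's own red cells lack.
        ANTIGENS = {"A": {"A"}, "B": {"B"}, "AB": {"A", "B"}, "O": set()}

        def serum_antibodies(group):
            """Naturally occurring anti-A/anti-B against the antigens one lacks."""
            return {"A", "B"} - ANTIGENS[group]

        def red_cells_clump(recipient_group, donor_group):
            """Clumping occurs when recipient antibodies meet donor antigens."""
            return bool(serum_antibodies(recipient_group) & ANTIGENS[donor_group])

        for donor in ("A", "B", "AB", "O"):
            print(donor, "->", "clumps" if red_cells_clump("A", donor) else "no clumping")
        # Group O cells carry neither antigen, so they clump with no recipient serum.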

    The discovery of the Rh system by Landsteiner and Alexander Wiener in 1940 was made because they tested human red cells with antisera developed in rabbits and guinea pigs by immunization of the animals with the red cells of the rhesus monkey Macaca mulatta.

    Other blood groups were identified later, such as Kell, Diego, Lutheran, Duffy, and Kidd. The remaining blood group systems were first described after antibodies were identified in patients. Frequently, such discoveries resulted from the search for the explanation of an unexpected unfavourable reaction in a recipient after a transfusion with apparently compatible blood. In such cases the antibodies in the recipient were produced against previously unidentified antigens in the donor's blood. In the case of the Rh system, for example, the presence of antibodies in the maternal serum directed against antigens present on the child's red cells can have serious consequences because of antigen-antibody reactions that produce erythroblastosis fetalis, or hemolytic disease of the newborn. Some of the other blood group systems—for example, the Kell and Kidd systems—were discovered because an infant was found to have erythroblastosis fetalis even though mother and child were compatible as far as the Rh system was concerned.

     

    The importance of antigens and antibodies

    The red cells of an individual contain antigens on their surfaces that correspond to their blood group and antibodies in the serum that identify and combine with the antigen sites on the surfaces of red cells of another type. The reaction between red cells and corresponding antibodies usually results in clumping—agglutination—of the red cells; therefore, antigens on the surfaces of these red cells are often referred to as agglutinogens.

    Antibodies are part of the circulating plasma proteins known as immunoglobulins, which are classified by molecular size and weight and by several other biochemical properties. Most blood group antibodies are found either on immunoglobulin G (IgG) or immunoglobulin M (IgM) molecules, but occasionally the immunoglobulin A (IgA) class may exhibit blood group specificity. Naturally occurring antibodies are the result of immunization by substances in nature that have structures similar to human blood groups. These antibodies are present in an individual despite the fact that there has been no previous exposure to the corresponding red cell antigens—for example, anti-A in the plasma of people of blood group B and anti-B in the plasma of people of blood group A. Immune antibodies are evoked by exposure to the corresponding red cell antigen. Immunization (i.e., the production of antibodies in response to antigen) against blood group antigens in humans can occur as a result of pregnancy, blood transfusion, or deliberate immunization. The combination of pregnancy and transfusion is a particularly potent stimulus. Individual blood group antigens vary in their antigenic potential; for example, some of the antigens belonging to the Rh and ABO systems are strongly immunogenic (i.e., capable of inducing antibody formation), whereas the antigens of the Kidd and Duffy blood group systems are much weaker immunogens.

    The blood group antigens are not restricted solely to red cells or even to hematopoietic tissues. The antigens of the ABO system are widely distributed throughout the tissues and have been unequivocally identified on platelets and white cells (both lymphocytes and polymorphonuclear leukocytes) and in skin, the epithelial (lining) cells of the gastrointestinal tract, the kidney, the urinary tract, and the lining of the blood vessels. Evidence for the presence of the antigens of other blood group systems on cells other than red cells is less well substantiated. Among the red cell antigens, only those of the ABO system are regarded as tissue antigens and therefore need to be considered in organ transplantation.

     

    Chemistry of the blood group substances

    The exact chemical structure of some blood groups has been identified, as have the gene products (i.e., those molecules synthesized as a result of an inherited genetic code on a gene of a chromosome) that assist in synthesizing the antigens on the red cell surface that determine the blood type. Blood group antigens are present on glycolipid and glycoprotein molecules of the red cell membrane. The carbohydrate chains of the membrane glycolipids are oriented toward the external surface of the red cell membrane and carry antigens of the ABO, Hh, Ii, and P systems. Glycoproteins, which traverse the red cell membrane, have a polypeptide backbone to which carbohydrates are attached. An abundant glycoprotein, band 3, contains ABO, Hh, and Ii antigens. Another integral membrane glycoprotein, glycophorin A, contains large numbers of sialic acid molecules and MN blood group structures; another, glycophorin B, contains Ss and U antigens.

    The genes responsible for inheritance of ABH and Lewis antigens are glycosyltransferases (a group of enzymes that catalyze the addition of specific sugar residues to the core precursor substance). For example, the H gene codes for the production of a specific glycosyltransferase that adds l-fucose to a core precursor substance, resulting in the H antigen; the Le gene codes for the production of a specific glycosyltransferase that adds l-fucose to the same core precursor substance, but in a different place, forming the Lewis antigen; the A gene adds N-acetyl-d-galactosamine (H must be present), forming the A antigen; and the B gene adds d-galactose (H must be present), forming the B antigen. The P system is analogous to the ABH and Lewis blood groups in the sense that the P antigens are built by the addition of sugars to precursor globoside and paragloboside glycolipids, and the genes responsible for these antigens must produce glycosyltransferase enzymes.
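
    The gene-by-gene logic of this paragraph (the H enzyme adds L-fucose to the precursor, and the A and B enzymes then act only when H substance is present) can be sketched as a tiny builder. The names are shorthand for the molecules in the text, and the strictly sequential model is a simplification.

        # Sketch of the glycosyltransferase steps described above.
        def build_antigen(genes, chain=("precursor",)):
            """Append sugars to the precursor according to which genes are present."""
            chain = list(chain)
            if "H" in genes:
                chain.append("L-fucose")                    # H gene -> H antigen
            if "A" in genes and "L-fucose" in chain:        # A acts only on H substance
                chain.append("N-acetyl-D-galactosamine")    # -> A antigen
            if "B" in genes and "L-fucose" in chain:        # B acts only on H substance
                chain.append("D-galactose")                 # -> B antigen
            return chain

        print(build_antigen({"H", "A"}))   # precursor + fucose + GalNAc: A antigen
        print(build_antigen({"A"}))        # without H, no A antigen can be built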

    The genes that code for MNSs glycoproteins change two amino acids in the sequence of the glycoprotein to account for different antigen specificities. Additional analysis of red cell membrane glycoproteins has shown that in some cases the absence of blood group antigens is associated with an absence of minor membrane glycoproteins that are present normally in antigen-positive persons.

     

    Methods of blood grouping

    Identification of blood groups

    The basic technique in identification of the antigens and antibodies of blood groups is the agglutination test. Agglutination of red cells results from antibody cross-linkages established when different specific combining sites of one antibody react with antigen on two different red cells. By mixing red cells (antigen) and serum (antibody), either the type of antigen or the type of antibody can be determined depending on whether a cell of known antigen composition or a serum with known antibody specificity is used.

    In its simplest form, a volume of serum containing antibody is added to a thin suspension (2–5 percent) of red cells suspended in physiological saline solution in a small tube with a narrow diameter. After incubation at the appropriate temperature, the red cells will have settled to the bottom of the tube. These sedimented red cells are examined macroscopically (with the naked eye) for agglutination, or they may be spread on a slide and viewed through a low-power microscope.

    An antibody that agglutinates red cells when they are suspended in saline solution is called a complete antibody. With powerful complete antibodies, such as anti-A and anti-B, agglutination reactions visible to the naked eye take place when a drop of antibody is placed on a slide together with a drop containing red cells in suspension. After stirring, the slide is rocked, and agglutination is visible in a few minutes. It is always necessary in blood grouping to include a positive and a negative control for each test.

    An antibody that does not clump red cells when they are suspended in saline solution is called incomplete. Such antibodies block the antigenic sites of the red cells so that subsequent addition of complete antibody of the same antigenic specificity does not result in agglutination. Incomplete antibodies will agglutinate red cells carrying the appropriate antigen, however, when the cells are suspended in media containing protein. Serum albumin from the blood of cattle is a substance that is frequently used for this purpose. Red cells may also be rendered specifically agglutinable by incomplete antibodies after treatment with such protease enzymes as trypsin, papain, ficin, or bromelain.

    After such infections as pneumonia, red cells may become agglutinable by almost all normal sera because of exposure of a hidden antigenic site (T) as a result of the action of bacterial enzymes. When the patient recovers, the blood also returns to normal with respect to agglutination. It is unusual for the red cells to reflect antigenicity other than that determined by the individual's genetic makeup. The presence of an acquired B antigen on the red cells has been described occasionally in diseases of the colon, thus allowing the red cell to express an antigenicity other than that genetically determined. Other diseases may alter immunoglobulins; for example, some may induce the production of antibodies directed against the person's own blood groups (autoimmune hemolytic anemia) and thus may interfere with blood grouping. In other diseases a defect in antibody synthesis may cause the absence of anti-A and anti-B antibody.

     

    Coombs test

    When an incomplete antibody reacts with the red cells in saline solution, the antigenic sites become coated with antibody globulin (gamma globulin), and no visible agglutination reaction takes place. The presence of gamma globulin on cells can be detected by the Coombs test, named for its inventor, English immunologist Robert Coombs. Coombs serum (also called antihuman globulin) is made by immunizing rabbits with human gamma globulin. The rabbits respond by making antihuman globulin (i.e., antibodies against human gamma globulin and complement) that is then purified before use. The antihuman globulin usually contains antibodies against IgG and complement. Coombs serum is added to the washed cells; the tube is centrifuged; and, if the cells are coated by gamma globulin or complement, agglutinates will form. Newer antiglobulin reagents (made by immunizing with purified protein) can detect either globulin or complement. Depending on how it is performed, the Coombs test can detect incomplete antibody in the serum or antibody bound to the red cell membrane. In certain diseases, anemia may be caused by the coating of red cells with gamma globulin. This can happen when a mother has made antibodies against the red cells of her newborn child or if a person makes an autoantibody against his own red cells.

     

    Adsorption, elution, and titration

    If a serum contains a mixture of antibodies, it is possible to prepare pure samples of each by a technique called adsorption. In this technique an unwanted antibody is removed by mixing it with red cells carrying the appropriate antigen. The antigen interacts with the antibody and binds it to the cell surface. These red cells are washed thoroughly and spun down tightly by centrifugation, all the fluid above the cells is removed, and the cells are then said to be packed. The cells are packed to avoid dilution of the antibody being prepared. Adsorption, then, is a method of separating mixtures of antibodies by removing some and leaving others. It is used to identify antibody mixtures and to purify reagents. The purification of the Coombs serum (see above) is done in the same way.

    If red cells have adsorbed gamma globulin onto their surfaces, the antibody can sometimes be recovered by a process known as elution. One simple way of eluting (dissociating) antibody from washed red cells is to heat them at 56 °C (133 °F) in a small volume of saline solution. Other methods include use of acid or ether. This technique is sometimes useful in the identification of antibodies.

    Titration is used to determine the strength of an antibody. Doubling dilutions of the antibody are made in a suitable medium in a series of tubes. Cells carrying the appropriate antigen are added, and the agglutination reactions are read and scored for the degree of positivity. The strength (titre) of the antibody is given by the highest dilution at which some degree of agglutination, however weak, can still be seen. This would not be a safe dilution to use for blood-grouping purposes: if an antiserum is to be diluted for use as a reagent, the dilution chosen must be one at which strong positive reactions still occur with selected positive control cells. Titration is helpful when preparing reagents and comparing antibody concentrations at different time intervals.
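    As a minimal sketch of the underlying arithmetic (the dilution series and scores below are hypothetical), the titre can be read off as the highest dilution still showing agglutination:

```python
# Hypothetical titration series: doubling dilutions of an antiserum,
# scored True where any agglutination is still visible.
dilutions = [2, 4, 8, 16, 32, 64, 128, 256]
agglutinated = [True, True, True, True, True, True, False, False]

# The titre is conventionally reported as the reciprocal of the
# highest dilution that still gives a positive reaction.
titre = max(d for d, pos in zip(dilutions, agglutinated) if pos)
print(f"Titre: 1:{titre}")  # -> Titre: 1:64
```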

     

    Inhibition tests

    Inhibition tests are used to detect the presence of antigen with blood group specificity in solutions; inhibition of a known antibody-antigen reaction by a fluid indicates a particular blood group specificity. If an active substance is added to antibody, neutralization of the antibody's activity prevents agglutination when red cells carrying the appropriate antigen are subsequently added to the mixture. A, B, Lewis, Chido, Rodgers, and P antigens are readily available and can be used to facilitate antibody identification. This technique was used to elucidate the biochemistry of the ABH, Ii, and Lewis systems, and it is important in forensic medicine as a means of identifying antigens in bloodstains.

     

    Hemolysis

    Laboratory tests in which hemolysis (destruction) of the red cells is the end point are not used frequently in blood grouping. For hemolysis to take place, a particular component of fresh serum called complement must be present. Complement must be added to the mixture of antibody and red cells. It may sometimes be desirable to look for hemolysins that destroy group A red cells in mothers whose group A children are incompatible or in individuals, not belonging to groups A or AB, who have been immunized with tetanus toxoid that contains substances with group A specificity.

    Hemolytic reactions may occur in patients who have been given transfusions of blood that either is incompatible or has already hemolyzed. The sera of such patients require special investigations to detect the presence of hemoglobin that has escaped from red cells destroyed within the body and for the breakdown products of other red cell constituents.

     

    Sources of antibodies and antigens

    Normal donors are used as the source of supply of naturally occurring antibodies, such as those of the ABO, P, and Lewis systems. These antibodies work best at temperatures below that of the body (37 °C, or 98.6 °F); in the case of what are known as cold agglutinins, such as anti-P1, the antibody is most active at 4 °C (39 °F). Most antibodies used in blood grouping must be searched for in immunized donors.

    Antibodies for MN typing are usually raised in rabbits, as is the Coombs serum. Antibodies prepared in this way have to be absorbed free of unwanted components and carefully standardized before use. Additional substances with specific blood group activity have been found in certain plants. Plant agglutinins are called lectins. Some useful reagents extracted from seeds are anti-H from Ulex europaeus (common gorse); anti-A1 from another member of the pulse family Fabaceae (Leguminosae), Dolichos biflorus; and anti-N from the South American plant Vicia graminea. Agglutinins have also been found in animals—for example, in the fluid pressed from the land snail Otala lactea. Additional plant lectins and agglutinins from animal fluids have been isolated.

    Monoclonal antibodies (structurally identical antibodies produced by hybridomas) to blood group antigens are replacing some of the human blood grouping reagents. Mouse hybridomas (hybrid cells formed by fusing a myeloma tumour cell with a lymphocyte) produce anti-A and anti-B monoclonal antibodies. The antibodies are made by immunizing with either red cells or synthetic carbohydrates. In addition to their use in blood grouping, these monoclonal antibodies can be of use in defining the hereditary background (heterogeneity) and structure of the red cell antigen.

     

    Uses of blood grouping

    Transfusion

    The blood donated by healthy persons is tested to ensure that the level of hemoglobin is satisfactory and that there is no risk of transmitting certain diseases, such as AIDS or hepatitis. It is then fractionated (split) into its component parts, particularly red cells, plasma, and platelets. Correct matching for the ABO system is vital. Compatible donors on the basis of their possessing A, B, or O blood are shown in the table.

    As explained above, the most important blood group systems for transfusion of red cells are ABO and Rh. A person's serum contains antibody directed against whichever of the A and B antigens the person's red cells lack; for example, group A blood carries A antigens on its red cell surfaces and anti-B antibodies in the surrounding serum. Group O individuals lack both the A and the B antigen and thus have both anti-A and anti-B in their serum. If these antibodies combine with the appropriate antigen, the result is a hemolytic transfusion reaction and possibly death. Red cell transfusions must therefore be ABO compatible. The blood groups A and B have various subgroups (e.g., A1, A2, and A3, and B1, B2, and B3), but the only common subgroups likely to affect red cell transfusions are those of A.
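    The compatibility rule just described reduces to a small set computation. The sketch below (the function names and layout are ours, not a laboratory standard) derives each group's antibodies from the antigens its red cells lack and checks donor cells against recipient serum:

```python
# Sketch of ABO red cell compatibility: a transfusion is acceptable only
# if the donor cells carry no antigen against which the recipient's
# serum holds antibody.
ANTIGENS = {"O": set(), "A": {"A"}, "B": {"B"}, "AB": {"A", "B"}}

def recipient_antibodies(group: str) -> set:
    # A person makes antibody against whichever antigens their cells lack.
    return {"A", "B"} - ANTIGENS[group]

def red_cells_compatible(donor: str, recipient: str) -> bool:
    return not (ANTIGENS[donor] & recipient_antibodies(recipient))

# Group O cells lack both antigens, so they suit every recipient;
# group AB recipients have neither antibody, so they accept all cells.
for donor in ("O", "A", "B", "AB"):
    ok = [r for r in ("O", "A", "B", "AB") if red_cells_compatible(donor, r)]
    print(donor, "->", ok)
```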

    Potential donors are also tested for some of the antigens of the Rh system, since it is essential to know whether they are Rh-positive or Rh-negative, Rh-negative indicating the absence of the D antigen. Rh-negative persons transfused with Rh-positive blood will make anti-D antibodies 50 to 75 percent of the time. Antibody made in response to a foreign red cell antigen is usually not harmful in itself but does require subsequent transfusions to be antigen-negative. Rh-positive blood should never be given to Rh-negative females before or during the childbearing age unless Rh-negative blood is not available and the transfusion is lifesaving. If such a woman subsequently became pregnant with an Rh-positive fetus, she might form anti-Rh antibody, even in a first pregnancy, and the child might develop erythroblastosis fetalis (hemolytic disease of the newborn).

    Care must be taken not to give a transfusion unless the cells of the donor have been tested against the recipient's serum. If this compatibility test indicates the presence of antibodies in the recipient's serum for the antigens carried by the donor's cells, the blood is not suitable for transfusion because an unfavourable reaction might occur. The test for compatibility is called the direct match test: the recipient's serum is tested against the donor's cells, both directly and by the indirect Coombs test. These are adequate screening tests for most naturally occurring and immune antibodies.

    If, in spite of all the compatibility tests, a reaction does occur after the transfusion is given (the unfavourable reaction often manifests itself in the form of a fever), an even more careful search must be made for any red cell antibody that might be the cause. A reaction after transfusion is not necessarily due to red cell antigen-antibody reactions. It could be caused by the presence of antibodies to the donor's platelets or white cells. Transfusion reactions are a particular hazard for persons requiring multiple transfusions.

     

    Organ transplants

    The ABO antigens are widely distributed throughout the tissues of the body. Therefore, when organs such as kidneys are transplanted, most surgeons prefer to use organs that are matched to the recipient's with respect to the ABO antigen system, although the occasional survival of a grafted ABO-incompatible kidney has occurred. The remaining red cell antigen systems are not relevant in organ transplantation.

     

    Paternity testing

    Although blood group studies cannot be used to prove paternity, they can provide unequivocal evidence that a male is not the father of a particular child. Since the red cell antigens are inherited as dominant traits, a child cannot have a blood group antigen that is not present in one or both parents. For example, if the child in question belongs to group A and both the mother and the putative father are group O, the man is excluded from paternity. The table shows the phenotypes (observed characters) of the offspring that can and cannot be produced in matings on the ABO system, considering only the three alleles (alternative genes) A, B, and O. Similar inheritance patterns are seen in all blood group systems. Furthermore, if one parent is genetically homozygous for a particular antigen—that is, has inherited the gene for it from both the grandfather and grandmother of the child—then that antigen must appear in the blood of the child. For example, in the MN system, a father whose phenotype is M and whose genotype is MM (in other words, a man who is of blood type M and has inherited the characteristic from both parents) will transmit an M allele to all his progeny.
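    The exclusion logic can be made concrete with a short sketch. Assuming only the three alleles A, B, and O with simple dominance, as in the text, the following Python fragment enumerates the phenotypes a given mating can produce:

```python
from itertools import product

# ABO genotypes behind each phenotype (A and B dominant over O).
GENOTYPES = {"A": ["AA", "AO"], "B": ["BB", "BO"], "AB": ["AB"], "O": ["OO"]}

def phenotype(genotype: str) -> str:
    alleles = set(genotype)
    if alleles == {"A", "B"}:
        return "AB"
    if "A" in alleles:
        return "A"
    if "B" in alleles:
        return "B"
    return "O"

def possible_children(mother: str, father: str) -> set:
    kids = set()
    for gm, gf in product(GENOTYPES[mother], GENOTYPES[father]):
        for a, b in product(gm, gf):   # one allele from each parent
            kids.add(phenotype(a + b))
    return kids

# The example from the text: a group A child cannot have two group O parents.
print("A" in possible_children("O", "O"))  # -> False, paternity excluded
```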

    In medicolegal work it is important that the blood samples are properly identified. By using multiple red cell antigen systems and adding additional studies on other blood types (HLA [human leukocyte antigen], red cell enzymes, and plasma proteins), it is possible to state with a high degree of statistical certainty that a particular male is the father.

     

    Blood groups and disease

    In some cases an increased incidence of a particular antigen seems to be associated with a certain disease. Stomach cancer is more common in people of group A than in those of groups O and B. Duodenal ulceration is more common in nonsecretors of ABH substances than in secretors. For practical purposes, however, these statistical correlations are unimportant. There are other examples that illustrate the importance of blood groups to the normal functions of red cells.

    In persons who lack all Rh antigens, red cells of altered shape (stomatocytes) and a mild compensated hemolytic anemia are present. The McLeod phenotype (weak Kell antigens and no Kx antigen) is associated with acanthocytosis (a condition in which red cells have thorny projections) and a compensated hemolytic anemia. There is evidence that Duffy-negative human red cells are resistant to infection by Plasmodium knowlesi, a simian malaria parasite. Other studies indicate that P. falciparum receptors may reside on glycophorin A and may be related to the Wrb antigen.

    Blood group incompatibility between mother and child can cause erythroblastosis fetalis (hemolytic disease of the newborn). In this disease IgG blood group antibody molecules cross the placenta, enter the fetal circulation, react with the fetal red cells, and destroy them. Only certain blood group systems cause erythroblastosis fetalis, and the severity of the disease in the fetus varies greatly. ABO incompatibility usually leads to mild disease. Rh, or D antigen, incompatibility is now largely preventable by treating Rh-negative mothers with Rh immunoglobulin, which prevents immunization (forming antibodies) to the D antigen. Many other Rh antigens, as well as other red cell group antigens, cause erythroblastosis fetalis. The baby may be anemic at birth, which can be treated by transfusion with antigen-negative red cells. Even total exchange transfusion may be necessary. In some cases, transfusions may be given while the fetus is still within the uterus (intrauterine transfusion). Hyperbilirubinemia (an increased amount of bilirubin, a breakdown product of hemoglobin, in the blood) may lead to neurological deficits. Exchange transfusion eliminates most of the hemolysis by providing red cells, which do not react with the antibody. It also decreases the amount of antibody and allows the child to recover from the disease. Once the antibody disappears, the child's own red cells survive normally.

     

    Genetic and evolutionary significance of blood groups

    Blood groups and genetic linkage

    Red cell groups act as markers (inherited characteristics) for genes present on chromosomes, which are responsible for their expression. The site of a particular genetic system on a chromosome is called a locus. Each locus may be the site of several alleles (alternative genes). In an ordinary cell of the human body, there are 46 chromosomes arranged in 23 pairs, 22 pairs of which are autosomes (chromosomes other than sex chromosomes), with the remaining pair being the sex chromosomes, designated XX in females and XY in males. The loci of the blood group systems are on the autosomes, except for Xg, which is unique among the blood groups in being located on the X chromosome. Genes carried by the X chromosome are said to be sex-linked. Since the blood groups are inherited in a regular fashion, they can be used as genetic markers in family studies to investigate whether any two particular loci are sited on the same chromosome—i.e., are linked. The genes sited at loci on the same chromosome travel together from parent to child, and, if the loci are close together, the genes will rarely be separated.

    Loci that are farther apart can be separated by recombination. This happens when material is exchanged between homologous chromosomes (the members of a chromosome pair) by crossing over during meiosis, the cell division that produces the reproductive cells. The reproductive cells contain half the number of chromosomes of the rest of the body; ova carry an X chromosome and spermatozoa an X or a Y. The characteristic number of 46 chromosomes is restored at fertilization. In a classical pedigree linkage study, all the members of a family are examined for a test character and for evidence of the nonindependent segregation of pairs of characters. The results must be assessed statistically to determine linkage. Individual chromosomes are identified by the banding patterns revealed by different staining techniques. Segments of chromosomes, or chromosomes that are aberrant in number and morphology, may be precisely identified. Other methods for localizing markers on chromosomes include somatic cell hybridization (fusion of cells of different species in culture) and the use of DNA probes (strands of radiolabeled DNA). These methods are useful in classical linkage studies to locate blood group loci. The loci for many red cell groups have been found on chromosomes and in many cases have been further localized to a particular region of the chromosome. Their chromosome assignments and linkage to genes, as well as associated abnormalities, are outlined in the table.

    In some of the blood group systems, the amount of antigen produced depends on the genetic constitution. The ABO blood group gene codes for a specific carbohydrate transferase enzyme that catalyzes the addition of specific sugars onto a precursor substance. As a new sugar is added, a new antigen is produced. Antigens in the MNSs blood system are the products of genes that control terminal amino acid sequence. The amount of antigen present may depend on the amount of gene product inherited or on the activity of the gene product (i.e., transferase). The red cells of a person whose genotype is MM show more M antigen than do MN red cells. In the case of ABO, the same mechanism may also play a role in antigen expression, but specific activity of the inherited transferase may be more important.

    The amount of antigen produced can also be influenced by the position of the genes. Such effects within a genetic complex can be due to determinants on the same chromosome—they are then said to be cis—or to determinants on the opposite chromosome of a chromosome pair—trans.

    In the Rh combination cdE/cde, more E antigen is produced than in the combination cDE/cde. This may be due to the suppressor effect of D on E. An example of suppression in the trans situation is that more C antigen is detectable on the red cells from CDe/cde donors than on those of CDe/cDE people. The inheritance of the Rh system probably depends on the existence of operator genes, which turn the activity of closely linked structural genes on or off. The operator genes are themselves controlled by regulator genes. The operator genes are responsible for the quantity of Rh antigens, while the structural genes are responsible for their qualitative characteristics.

    The detection of recombination (exchange of material between chromosomes) or mutation in blood group genes in human families is complicated by questions of paternity. In spite of the large number of families that have been studied, such events have proved extremely rare. The paucity of examples may indicate that the recombination and mutation rates for blood group genes are lower than those estimated for other human genes.

     

    Blood groups and population groups

    The blood groups are found in all human populations but vary in frequency. An analysis of populations yields striking differences in the frequency of some blood group genes. The frequency of the A gene is highest among Australian Aborigines, the Blackfoot Indians of Montana in the United States, and the Sami people of northern Scandinavia. The O gene is common throughout the world, particularly among peoples of South and Central America. The maximum frequency of the B gene occurs in Central Asia and northern India. On the Rh system, most northern and central European populations differ from each other only slightly and are characterized by a cde (r) frequency of about 40 percent. Africans show a preponderance of the complex cDe, and the frequency of cde is about 20 percent. In eastern Asia cde is almost wholly absent, and, since practically everyone has the D antigen, erythroblastosis fetalis due to maternal anti-D is virtually unknown in these populations.
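    Gene frequencies such as those quoted above are not observed directly but are estimated from phenotype counts. A minimal sketch, using Bernstein's classical Hardy–Weinberg approximation (a standard population-genetics method not described in this article) and hypothetical phenotype frequencies:

```python
from math import sqrt

# Hypothetical ABO phenotype frequencies for one population (sum to 1).
freq = {"O": 0.44, "A": 0.42, "B": 0.10, "AB": 0.04}

# Bernstein's estimates under Hardy-Weinberg proportions:
#   f(O) = r^2,  f(A) = p^2 + 2pr,  f(B) = q^2 + 2qr
r = sqrt(freq["O"])                    # frequency of the O allele
p = 1 - sqrt(freq["B"] + freq["O"])    # frequency of the A allele
q = 1 - sqrt(freq["A"] + freq["O"])    # frequency of the B allele

print(f"p(A)={p:.3f}  q(B)={q:.3f}  r(O)={r:.3f}  sum={p + q + r:.3f}")
```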

    The blood group frequencies in small inbred populations reflect the influence of genetic drift. In a small community an allele can be lost from the genetic pool if the persons carrying it happen to be infertile, while it can increase in frequency if it happens to confer an advantage. It has been suggested, for example, that B alleles were lost by chance from Native Americans and Australian Aborigines when these communities were small. There are pronounced discrepancies in blood group frequencies between the people of eastern Asia and the aboriginal peoples of the Americas. Conversely, close similarity in blood group frequencies between two populations can point to common ancestry.

    Nonhuman primates carry blood group antigens that can be detected with reagents used for typing human beings. The closer their evolutionary relationship to humans, the greater their similarity with respect to antigens. The red cells of the apes, with the exception of the gorilla, have ABO antigens that are indistinguishable from those of human cells. Chimpanzees and orangutans are most frequently group A, but groups O, B, and AB are represented. Gibbons can be of any group except O, and gorillas have a B-like antigen that is not identical in activity with the human one. In both Old and New World monkeys the red cells do not react with anti-A or with anti-B, but A and B substances can be demonstrated in the secretions, and the corresponding agglutinins are present in the serum. As far as the Rh system is concerned, chimpanzees carry two Rh antigens—D and c (hr′)—but not the others, whereas gibbons have only c (hr′). The red cells of monkeys do not give clear-cut reactions with human anti-Rh sera.

    Genetics, Human

    Introduction

    study of the inheritance of characteristics by children from parents. Inheritance in humans does not differ in any fundamental way from that in other organisms.

    The study of human heredity occupies a central position in genetics. Much of this interest stems from a basic desire to know who humans are and why they are as they are. At a more practical level, an understanding of human heredity is of critical importance in the prediction, diagnosis, and treatment of diseases that have a genetic component. The quest to determine the genetic basis of human health has given rise to the field of medical genetics. In general, medicine has given focus and purpose to human genetics, so that the terms medical genetics and human genetics are often considered synonymous.

     

    The human chromosomes

    A new era in cytogenetics, the field of investigation concerned with studies of the chromosomes, began in 1956 with the discovery by Jo Hin Tjio and Albert Levan that human somatic cells contain 23 pairs of chromosomes. Since that time the field has advanced with amazing rapidity and has demonstrated that human chromosome aberrations rank as major causes of fetal death and of tragic human diseases, many of which are accompanied by mental retardation. Since the chromosomes can be delineated only during mitosis, it is necessary to examine material in which there are many dividing cells. This can usually be accomplished by culturing cells from the blood or skin, since only the bone marrow cells (not readily sampled except during serious bone marrow disease such as leukemia) have sufficient mitoses in the absence of artificial culture. After growth, the cells are fixed on slides and then stained with a variety of DNA-specific stains that permit the delineation and identification of the chromosomes. The Denver system of chromosome classification, established in 1959, identified the chromosomes by their length and the position of the centromeres. Since then the method has been improved by the use of special staining techniques that impart unique light and dark bands to each chromosome. These bands permit the identification of chromosomal regions that are duplicated, missing, or transposed to other chromosomes.

    Micrographs showing the karyotypes (i.e., the physical appearance of the chromosome) of a male and female have been produced. In a typical micrograph the 46 human chromosomes (the diploid number) are arranged in homologous pairs, each consisting of one maternally derived and one paternally derived member. The chromosomes are all numbered except for the X and the Y chromosomes, which are the sex chromosomes. In humans, as in all mammals, the normal female has two X chromosomes and the normal male has one X chromosome and one Y chromosome. The female is thus the homogametic sex, as all her gametes normally have one X chromosome. The male is heterogametic, as he produces two types of gametes—one type containing an X chromosome and the other containing the Y chromosome. There is good evidence that the Y chromosome in humans, unlike that in Drosophila, is necessary (but not sufficient) for maleness.

     

    Fertilization, sex determination, and differentiation

    A human individual arises through the union of two cells, an egg from the mother and a sperm from the father. Human egg cells are barely visible to the naked eye. They are shed, usually one at a time, from the ovary into the oviducts (fallopian tubes), through which they pass into the uterus. Fertilization, the penetration of an egg by a sperm, occurs in the oviducts. This is the main event of sexual reproduction and determines the genetic constitution of the new individual.

    Human sex determination is a genetic process that depends basically on the presence of the Y chromosome in the fertilized egg. This chromosome stimulates the undifferentiated gonad to develop into that of the male (a testicle). The gonadal action of the Y chromosome is mediated by a gene located near the centromere; this gene codes for the production of a cell surface molecule called the H-Y antigen. Further development of the anatomic structures, both internal and external, that are associated with maleness is controlled by hormones produced by the testicle. The sex of an individual can be thought of in three different contexts: chromosomal sex, gonadal sex, and anatomic sex. Discrepancies among these, especially the latter two, result in the development of individuals with ambiguous sex, often called hermaphrodites. The phenomenon of homosexuality is of uncertain cause and is unrelated to the above sex-determining factors. It is of interest that in the absence of a male gonad (testicle) the internal and external sex anatomy is always female, even in the absence of a female ovary. A female without ovaries will, of course, be infertile and will not experience any of the female developmental changes normally associated with puberty. Such a female will often have Turner's syndrome.

    If X-containing and Y-containing sperm are produced in equal numbers, then according to simple chance one would expect the sex ratio at conception (fertilization) to be half boys and half girls, or 1 : 1. Direct observation of sex ratios among newly fertilized human eggs is not yet feasible, and sex-ratio data are usually collected at the time of birth. In almost all human populations of newborns there is a slight excess of males; about 106 boys are born for each 100 girls. Throughout life, however, there is a slightly greater mortality of males; this slowly alters the sex ratio until, beyond the age of about 50 years, there is an excess of females. Studies indicate that male embryos suffer a relatively greater degree of prenatal mortality, so that the sex ratio at conception might be expected to favour males even more than the 106 : 100 ratio observed at birth would suggest. Firm explanations for the apparent excess of male conceptions have not been established; it is possible that Y-containing sperm survive better within the female reproductive tract, or that they may be a little more successful in reaching the egg in order to fertilize it. In any case, the sex differences are small, the statistical expectation for a boy (or girl) at any single birth still being close to one out of two.
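    To make the arithmetic explicit, the 106:100 birth ratio translates into a probability only slightly above one-half (a trivial computation, shown here for concreteness):

```python
# The 106:100 sex ratio at birth, expressed as a probability of a boy.
boys_per_100_girls = 106
p_boy = boys_per_100_girls / (boys_per_100_girls + 100)
print(f"P(boy) = {p_boy:.3f}")  # -> 0.515, still close to one in two
```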

    During gestation—the period of nine months between fertilization and the birth of the infant—a remarkable series of developmental changes occurs. Through the process of mitosis, the total number of cells changes from one (the fertilized egg) to about 2 × 10¹¹. In addition, these cells differentiate into hundreds of different types with specific functions (liver cells, nerve cells, muscle cells, etc.). A multitude of regulatory processes, both genetically and environmentally controlled, accomplish this differentiation. Elucidation of the exquisite timing of these processes remains one of the great challenges of human biology.

     

    Immunogenetics

    Immunity is the ability of an individual to recognize the “self” molecules that make up one's own body and to distinguish them from such “non-self” molecules as those found in infectious microorganisms and toxins. This process has a prominent genetic component. Knowledge of the genetic and molecular basis of the mammalian immune system has increased in parallel with the explosive advances made in somatic cell and molecular genetics.

    There are two major components of the immune system, both originating from the same precursor “stem” cells. The bursa component provides B lymphocytes, a class of white blood cells that, when appropriately stimulated, differentiate into plasma cells. These latter cells produce circulating soluble proteins called antibodies or immunoglobulins. Antibodies are produced in response to substances called antigens, most of which are foreign proteins or polysaccharides. An antibody molecule can recognize a specific antigen, combine with it, and initiate its destruction. This so-called humoral immunity is accomplished through a complicated series of interactions with other molecules and cells; some of these interactions are mediated by another group of lymphocytes, the T lymphocytes, which are derived from the thymus gland. Once a B lymphocyte has been exposed to a specific antigen, it “remembers” the contact so that future exposure will cause an accelerated and magnified immune reaction. This is a manifestation of what has been called immunological memory.

    The thymus component of the immune system centres on the thymus-derived T lymphocytes. In addition to regulating the B cells in producing humoral immunity, the T cells also directly attack cells that display foreign antigens. This process, called cellular immunity, is of great importance in protecting the body against a variety of viruses as well as cancer cells. Cellular immunity is also the chief cause of the rejection of organ transplants. The T lymphocytes provide a complex network consisting of a series of helper cells (which are antigen specific), amplifier cells, suppressor cells, and cytotoxic (killer) cells, all of which are important in immune regulation.

     

    The genetics of antibody formation

    One of the central problems in understanding the genetics of the immune system has been in explaining the genetic regulation of antibody production. Immunobiologists have demonstrated that the system can produce well over 1,000,000 specific antibodies, each corresponding to a particular antigen. It would be difficult to envisage that each antibody is encoded by a separate gene—such an arrangement would require a disproportionate share of the entire human genome. Recombinant DNA analysis has illuminated the mechanisms by which a limited number of immunoglobulin genes can encode this vast number of antibodies.

    Each antibody molecule consists of two kinds of polypeptide chains—the light chains (L) and the longer heavy chains (H). The latter determine to which of five different classes (IgM, IgG, IgA, IgD, or IgE) an immunoglobulin belongs. Both the L and H chains are unique among proteins in that they contain constant and variable parts. The constant parts have essentially identical amino acid sequences in all antibodies of a given class. The variable parts, on the other hand, have different amino acid sequences in each antibody molecule. It is the variable parts, then, that determine the specificity of the antibody.

    Recombinant DNA studies of immunoglobulin genes in mice have revealed that the light-chain genes are encoded in four separate parts in germline DNA: a leader segment (L), a variable segment (V), a joining segment (J), and a constant segment (C). These segments are widely separated in the DNA of an embryonic cell, but in a mature B lymphocyte they are found in relative proximity (albeit separated by introns). The mouse has more than 200 light-chain variable region genes, only one of which is incorporated into the rearranged sequence that directs antibody production in a given B lymphocyte. Antibody diversity is greatly enhanced by this system, as the V and J segments rearrange and assort randomly in each B-lymphocyte precursor cell. The mechanisms by which this DNA rearrangement takes place are not fully clear, although transposase-like enzymes appear to be involved. Similar combinatorial processes take place in the genes that code for the heavy chains; furthermore, both the light-chain and heavy-chain genes can undergo somatic mutations to create new antibody-coding sequences. The net effect of these combinatorial and mutational processes is to enable the coding of millions of specific antibody molecules from a limited number of genes. It should be stressed, however, that each B lymphocyte can produce only one antibody. It is the B lymphocyte population as a whole that produces the tremendous variety of antibodies in humans and other mammals.
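    The combinatorial point can be illustrated numerically. In the sketch below, only the figure of more than 200 light-chain V segments comes from the text; the other segment counts (including the heavy chain's additional D segments, not discussed above) are assumed for illustration:

```python
# Illustrative segment counts; only v_light is taken from the text.
v_light, j_light = 200, 4                 # light-chain V and J segments
v_heavy, d_heavy, j_heavy = 100, 12, 4    # hypothetical heavy-chain counts

light_chains = v_light * j_light          # random V-J joining
heavy_chains = v_heavy * d_heavy * j_heavy

# Any light chain may pair with any heavy chain, multiplying the
# repertoire even before somatic mutation adds further diversity.
print(light_chains * heavy_chains)        # -> 3,840,000 combinations
```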

    Plasma cell tumours (myelomas) have made it possible to study individual antibodies since these tumours, which are descendants of a single plasma cell, produce one antibody in abundance. Another method of obtaining large amounts of a specific antibody is by fusing a B lymphocyte with a rapidly growing cancer cell. The resultant hybrid cell, known as a hybridoma, multiplies rapidly in culture. Since the antibodies obtained from hybridomas are produced by clones derived from a single lymphocyte, they are called monoclonal antibodies.

     

    The genetics of cellular immunity

    As has been stated, cellular immunity is mediated by T lymphocytes that can recognize infected body cells, cancer cells, and the cells of a foreign transplant. The control of cellular immune reactions is provided by a linked group of genes known as the major histocompatibility complex (MHC). These genes code for the major histocompatibility antigens, which are found on the surface of almost all nucleated somatic cells. The major histocompatibility antigens were first discovered on the leukocytes (white blood cells) and are therefore usually referred to as the HLA (human leukocyte group A) antigens.

    The advent of the transplantation of human organs in the 1950s made the question of tissue compatibility between donor and recipient of vital importance, and it was in this context that the HLA antigens and the MHC were elucidated. Investigators found that the MHC resides on the short arm of chromosome 6, on four closely associated sites designated HLA-A, HLA-B, HLA-C, and HLA-D. Each locus is highly polymorphic—i.e., each is represented by a great many alleles within the human gene pool. These alleles, like those of the ABO blood group system, are expressed in codominant fashion. Because of the large number of alleles at each HLA locus, there is an extremely low probability of any two individuals (other than siblings) having identical HLA genotypes. (Since a person inherits one chromosome 6 from each parent, siblings have a 25 percent probability of having received the same paternal and maternal chromosomes 6 and thus of being HLA matched.)
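    The 25 percent figure for siblings follows from simple enumeration of the four equally likely haplotype combinations, as this sketch (with placeholder haplotype labels) shows:

```python
from itertools import product

# Each parent has two chromosome-6 haplotypes and passes one at random.
father = ("a", "b")   # placeholder haplotype labels
mother = ("c", "d")

# The four equally likely genotypes of a child.
children = [frozenset(pair) for pair in product(father, mother)]

# Probability that two independently conceived children match exactly.
matches = sum(c1 == c2 for c1, c2 in product(children, repeat=2))
print(matches / len(children) ** 2)   # -> 0.25
```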

    Although HLA antigens are largely responsible for the rejection of organ transplants, it is obvious that the MHC did not evolve to prevent the transfer of organs from one person to another. Indeed, information obtained from the histocompatibility complex in the mouse (which is very similar in its genetic organization to that of the human) suggests that a primary function of the HLA antigens is to regulate the number of specific cytotoxic T killer cells, which have the ability to destroy virus-infected cells and cancer cells.

     
    Arthur Robinson

    The genetics of human blood

    More is known about the genetics of the blood than about any other human tissue. One reason for this is that blood samples can be easily secured and subjected to biochemical analysis without harm or major discomfort to the person being tested. Perhaps a more cogent reason is that many chemical properties of human blood display relatively simple patterns of inheritance.

     

    Blood types

    Certain chemical substances within the red blood cells (such as the ABO and MN substances noted above) may serve as antigens. When cells that contain specific antigens are introduced into the body of an experimental animal such as a rabbit, the animal responds by producing antibodies in its own blood.

    In addition to the ABO and MN systems, geneticists have identified about 14 blood-type gene systems associated with other chromosomal locations. The best known of these is the Rh system. The Rh antigens are of particular importance in human medicine. Curiously, however, their existence was discovered in monkeys. When blood from the rhesus monkey (hence the designation Rh) is injected into rabbits, the rabbits produce so-called Rh antibodies that will agglutinate not only the red blood cells of the monkey but the cells of a large proportion of human beings as well. Some people (Rh-negative individuals), however, lack the Rh antigen; the proportion of such persons varies from one human population to another. As with the ABO system, the evidence indicates that the Rh genes occupy a single chromosome locus (called r), which is located on chromosome 1. At least 35 Rh alleles are known for the r locus; basically, the Rh-negative condition is recessive.
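    Since the Rh-negative condition behaves as a simple recessive at a single locus, the chance of an Rh-negative child from two heterozygous Rh-positive parents follows from a Punnett-square enumeration, sketched here for concreteness:

```python
from itertools import product

# Two Rh-positive parents, each heterozygous (Dd), treated as a simple
# one-locus, two-allele system as in the text.
parent = ("D", "d")

offspring = [a + b for a, b in product(parent, parent)]  # DD, Dd, dD, dd
rh_negative = sum(1 for g in offspring if g == "dd")
print(rh_negative / len(offspring))  # -> 0.25 chance of an Rh-negative child
```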

    A medical problem may arise when a woman who is Rh-negative carries a fetus that is Rh-positive. The first such child may have no difficulty, but later similar pregnancies may produce severely anemic newborn infants. Exposure to the red blood cells of the first Rh-positive fetus appears to immunize the Rh-negative mother, that is, she develops antibodies that may produce permanent (sometimes fatal) brain damage in any subsequent Rh-positive fetus. Damage arises from the scarcity of oxygen reaching the fetal brain because of the severe destruction of red blood cells. Measures are available for avoiding the severe effects of Rh incompatibility by transfusions to the fetus within the uterus; however, genetic counselling before conception is helpful so that the mother can receive Rh immunoglobulin immediately after her first and any subsequent pregnancies involving an Rh-positive fetus. This immunoglobulin effectively destroys the fetal red blood cells before the mother's immune system is stimulated. The mother thus avoids becoming actively immunized against the Rh antigen and will not produce antibodies that could attack the red blood cells of a future Rh-positive fetus.

     

    Serum proteins

    Human serum, the fluid portion of the blood that remains after clotting, contains various proteins that have been shown to be under genetic control. Study of genetic influences has flourished since the development of precise methods for separating and identifying serum proteins. These move at different rates under the impetus of an electrical field (electrophoresis), as do proteins from many other sources (e.g., muscle or nerve). Since the composition of a protein is specified by the structure of its corresponding gene, biochemical studies based on electrophoresis permit direct study of tissue substances that are only a metabolic step or two away from the genes themselves.

    Electrophoretic studies have revealed that at least one-third of the human serum proteins occur in variant forms. Many of the serum proteins are polymorphic, occurring as two or more variants with a frequency of not less than 1 percent each in a population. Patterns of polymorphic serum protein variants have been used to determine whether twins are identical (as in assessing compatibility for organ transplants) or whether two individuals are related (as in resolving paternity suits). Whether or not the different forms have a selective advantage is not generally known.

    Much attention in the genetics of substances in the blood has been centred on serum proteins called haptoglobins, transferrins (which transport iron), and gamma globulins (a number of which are known to immunize against infectious diseases). Haptoglobins appear to be determined by two common alleles at a single chromosome locus; the mode of inheritance of the transferrins and gamma globulins seems more complicated, about 18 kinds of transferrins having been described. Like blood-cell antigen genes, serum-protein genes are distributed worldwide in the human population in a way that permits their use in tracing the origin and migration of different groups of people.

     

    Hemoglobin

    Hundreds of variants of hemoglobin have been identified by electrophoresis, but relatively few are frequent enough to be called polymorphisms. Of the polymorphisms, the alleles for sickle-cell and thalassemia hemoglobins produce serious disease in homozygotes, whereas others (hemoglobins C, D, and E) do not. The sickle-cell polymorphism confers a selective advantage on the heterozygote living in a malarial environment; the thalassemia polymorphism provides a similar advantage.

     

    Influence of the environment

    As stated earlier in this article, gene expression occurs only after modification by the environment. A good example is the recessively inherited disease called galactosemia, in which the enzyme necessary for the metabolism of galactose—a component of milk sugar—is defective. The sole source of galactose in the infant's diet is milk, which in this instance is toxic. The treatment of this most serious disease in the neonate is to remove all natural forms of milk from the diet (environmental manipulation) and to substitute a synthetic milk lacking galactose. The infant will then develop normally but will never be able to tolerate foods containing lactose. If milk were not a major part of the infant's diet, however, the mutant gene would never be able to express itself, and galactosemia would be unknown.

    Another way of saying this is that no trait can exist or become actual without an environmental contribution. Thus, the old question of which is more important, heredity or environment, is without meaning. Both nature (heredity) and nurture (environment) are always important for every human attribute.

    But this is not to say that the separate contributions of heredity and environment are equivalent for each characteristic. Dark pigmentation of the iris of the eye, for example, is under hereditary control in that one or more genes specify the synthesis and deposition in the iris of the pigment (melanin). This is one character that is relatively independent of such environmental factors as diet or climate; thus, individual differences in eye colour tend to be largely attributable to hereditary factors rather than to ordinary environmental change.

    On the other hand, it is unwarranted to assume that other traits (such as height, weight, or intelligence) are as little affected by environment as is eye colour. It is very easy to show that tall parents tend, on the average, to have tall children (and that short parents tend to produce short children), properly indicating a hereditary contribution to height. Nevertheless, it is equally manifest that growth can be stunted by such an environmental factor as inadequate nutrition. The dilemma is that only the combined, final result of this nature–nurture interaction can be observed directly. There is no accurate way (in the case of a single individual) to gauge the separate contributions of heredity and environment to such a characteristic as height. An inferential way out of this dilemma is provided by studies of twins.

     

    Fraternal twins

    Usually a fertile human female produces a single egg about once a month. Should fertilization occur (a zygote is formed), growth of the individual child normally proceeds after the fertilized egg has become implanted in the wall of the uterus (womb). In the unusual circumstance that two unfertilized eggs are simultaneously released by the ovaries, each egg may be fertilized by a different sperm cell at about the same time, become implanted, and grow, to result in the birth of twins.

    Twins formed from separate eggs and different sperm cells can be of the same or of either sex. No matter what their sex, they are designated as fraternal twins. This terminology is used to emphasize that fraternal twins are genetically no more alike than are siblings (brothers or sisters) born years apart. Basically they differ from ordinary siblings only in having grown side by side in the womb and in having been born at approximately the same time.

     

    Identical twins

    In a major nonfraternal type of twinning, only one egg is fertilized; but during the cleavage of this single zygote into two cells, the resulting pair somehow become separated. Each of the two cells may implant in the uterus separately and grow into a complete, whole individual. In laboratory studies with the zygotes of many animal species, it has been found that in the two-cell stage (and later) a portion of the embryo, if separated under the microscope by the experimenter, may develop into a perfect, whole individual. Such splitting occurs spontaneously at the four-cell stage in some organisms (e.g., the armadillo) and has been accomplished experimentally with the embryos of salamanders, among others.

    The net result of splitting at an early embryonic stage may be to produce so-called identical twins. Since such twins derive from the same fertilized egg, the hereditary material from which they originate is absolutely identical in every way, down to the last gene locus. While developmental and genetic differences between one “identical” twin and another still may arise through a number of processes (e.g., mutation), these twins are always found to be of the same sex. They are often breathtakingly similar in appearance, frequently down to very fine anatomic and biochemical details (although their fingerprints are differentiable).

     

    Diagnosis of twin types

    Since the initial event in the mother's body (either splitting of a single egg or two separate fertilizations) is not observed directly, inferential means are employed for diagnosing a set of twins as fraternal or identical. The birth of fraternal twins is frequently characterized by the passage of two separate afterbirths. In many instances, identical twins are followed by only a single afterbirth, but exceptions to this phenomenon are so common that this is not a reliable method of diagnosis.

    The most trustworthy method for inferring twin type is based on the determination of genetic similarity. By selecting those traits that display the least variation attributable to environmental influences (such as eye colour and blood types), it is feasible, if enough separate chromosome loci are considered, to make the diagnosis of twin type with high confidence. HLA antigens, which, as stated above, are very polymorphic, have become most useful in this regard.

     

    Inferences from twin studies

    Metric (quantitative) traits

    By measuring the heights of a large number of ordinary siblings (brothers and sisters) and of twin pairs, it may be shown that the average difference between identical twins is less than half the difference for all other siblings. Any average differences between groups of identical twins are attributable with considerable confidence to the environment. Thus, since the sample of identical twins who were reared apart (in different homes) differed little in height from identicals who were raised together, it appears that environmental–genetic influences on that trait tended to be similar for both groups.

    Yet, the data for like-sexed fraternal twins reveal a much greater average difference in height (about the same as that found for ordinary siblings reared in the same home at different ages). Apparently the fraternal twins were more dissimilar than identicals (even though reared together) because the fraternals differed more among themselves in genotype. This emphasizes the great genetic similarity among identicals. Such studies can be particularly enlightening when the effects of individual genes are obscured or distorted by the influence of environmental factors on quantitative (measurable) traits (e.g., height, weight, and intelligence).

    Any trait that can be objectively measured among identical and fraternal twins can be scrutinized for the particular combination of hereditary and environmental influences that impinge upon it. The effect of environment on identical twins reared apart is suggested by their relatively great average difference in body weight as compared with identical twins reared together. Weight appears to be more strongly modified by environmental variables than is height.

    Study of comparable characteristics among farm animals and plants suggests that such quantitative human traits as height and weight are affected by allelic differences at a number of chromosome locations rather than by genes at a single locus. Investigation of these gene systems with multiple locations (polygenic systems) is carried out largely through selective-breeding experiments among large groups of plants and lower animals. Human beings select their mates in a much freer fashion, of course, and polygenic studies among people are thus severely limited.

    Intelligence is a very complex human trait, the genetics of which has been a subject of controversy for some time. Much of the controversy arises from the fact that intelligence is so difficult to define. Information has been based almost entirely on scores on standardized IQ tests constructed by psychologists; in general such tests do not take into account cultural, environmental, and educational differences. As a result, the working definition of intelligence has been “the general factor common to a large number of diverse cognitive (IQ) tests.” Even roughly measured as IQ, intelligence shows a strong contribution from the environment. Fraternal twins, however, show relatively great dissimilarity in IQ, suggesting an important contribution from heredity as well. In fact, it has been estimated that on the average between 60 and 80 percent of the variance in IQ test scores could be genetic. It is important to note that intelligence is polygenically inherited and that it has the highest degree of assortative mating of any trait; in other words, people tend to mate with people having similar IQs. Moreover, twin studies involving psychological traits should be viewed with caution; for example, since identical twins tend to be singled out for special attention, their environment should not be considered equivalent even to that of other children raised in their own family.
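    Estimates of this kind are commonly derived from twin correlations. A minimal sketch using Falconer's classical approximation (a standard method not named in the text; the correlation values below are hypothetical):

```python
# Hypothetical within-pair IQ correlations for twins reared together.
r_mz = 0.85   # identical (monozygotic) twins
r_dz = 0.55   # like-sexed fraternal (dizygotic) twins

# Falconer's approximation: fraternal twins share on average half of
# the segregating genes, identical twins all of them, so doubling the
# difference in correlations estimates the genetic share of variance.
h2 = 2 * (r_mz - r_dz)
print(f"Estimated heritability = {h2:.2f}")   # -> 0.60
```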

    Since the time of Galton, generalizations have been repeatedly made about racial differences in intelligence, with claims of genetic superiority of some races over others. These generalizations fail to recognize that races are composed of individuals, each of whom has a unique genotype made up by genes shared with other humans, and that the sources of intraracial variation are more numerous than those producing interracial differences.

     

    Other traits

    For traits of a more qualitative (all-or-none) nature, the twin method can also be used in efforts to assess the degree of hereditary contribution. Such investigations are based on an examination of cases in which at least one member of the twin pair shows the trait. It was found in one study, for example, that in about 80 percent of all identical twin pairs in which one twin shows symptoms of the psychiatric disorder called schizophrenia, the other member of the pair also shows the symptoms (that is, the two are concordant for the schizophrenic trait). In the remaining 20 percent, the twins are discordant (that is, one lacks the trait). Since identical twins often have similar environments, this information by itself does not distinguish between the effects of heredity and environment. When pairs of like-sexed fraternal twins reared together are studied, however, the degree of concordance for schizophrenia is very much lower—only about 15 percent.
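    Pairwise concordance of the sort quoted above is computed as the fraction of affected pairs in which both twins show the trait. A minimal sketch with hypothetical data:

```python
# Hypothetical twin-pair data: True where that twin shows the trait.
pairs = [(True, True), (True, False), (True, True),
         (True, True), (True, False)]

# Concordance among pairs with at least one affected member.
affected = [p for p in pairs if any(p)]
concordant = sum(1 for a, b in affected if a and b)
print(concordant / len(affected))   # -> 0.6
```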

    Schizophrenia thus clearly develops much more easily in some genotypes than in others; this indicates a strong hereditary predisposition to the development of the trait. Schizophrenia also serves as a good example of the influence of environmental factors, since concordance for the condition does not appear in 100 percent of identical twins.

    Studies of concordance and discordance between identical and fraternal twins have been carried out for many other human characteristics. It has, for example, been known for many years that tuberculosis is a bacterial infection of environmental origin. Yet identical twins raised in the same home show concordance for the disease far more often than do fraternal twins. This finding seems to be explained by the high degree of genetic similarity among the identical twins. While the tuberculosis germ is not inherited, heredity does seem to make one more (or less) susceptible to this particular infection. Thus, the genes of one individual may provide the chemical basis for susceptibility to a disease, while the genes of another may fail to do so.

    Indeed, there seem to be genetic differences among disease germs themselves that result in differences in their virulence. Thus, whether a genetically susceptible person actually develops a disease also depends in part on the heredity of the particular strain of bacteria or virus with which he or she must cope. Consequently, unless environmental factors such as these are adequately evaluated, the conclusions drawn from susceptibility studies can be unfortunately misleading.

    The above discussion should help to make clear the limits of genetic determinism. The expression of the genotype can always be modified by the environment. It can be argued that all human illnesses have a genetic component and that the basis of all medical therapy is environmental modification. Specifically, this is the hope for the management of genetic diseases. The more that can be learned about the basic molecular and cellular dysfunctions associated with such diseases, the more amenable they will be to environmental manipulation.

    Biochemistry

    Introduction

    study of the chemical substances and processes that occur in plants, animals, and microorganisms and of the changes they undergo during development and life. It deals with the chemistry of life, and as such it draws on the techniques of analytical, organic, and physical chemistry, as well as those of physiologists concerned with the molecular basis of vital processes. All chemical changes within the organism—either the degradation of substances, generally to gain necessary energy, or the buildup of complex molecules necessary for life processes—are collectively termed metabolism. These chemical changes depend on the action of organic catalysts known as enzymes, and enzymes, in turn, depend for their existence on the genetic apparatus of the cell. It is not surprising, therefore, that biochemistry enters into the investigation of chemical changes in disease, drug action, and other aspects of medicine, as well as in nutrition, genetics, and agriculture.

    The term biochemistry is synonymous with two somewhat older terms: physiological chemistry and biological chemistry. Those aspects of biochemistry that deal with the chemistry and function of very large molecules (e.g., proteins and nucleic acids) are often grouped under the term molecular biology. Biochemistry is a young science, having been known under that term only since about 1900. Its origins, however, can be traced much further back; its early history is part of the early history of both physiology and chemistry.
     
    Historical background

    The particularly significant past events in biochemistry have been concerned with placing biological phenomena on firm chemical foundations.

    Before chemistry could contribute adequately to medicine and agriculture, however, it had to free itself from immediate practical demands in order to become a pure science. This happened in the period from about 1650 to 1780, starting with the work of Robert Boyle and culminating in that of Antoine-Laurent Lavoisier, the father of modern chemistry. Boyle questioned the basis of the chemical theory of his day and taught that the proper object of chemistry was to determine the composition of substances. His contemporary John Mayow observed the fundamental analogy between the respiration of an animal and the burning, or oxidation, of organic matter in air. Then, when Lavoisier carried out his fundamental studies on chemical oxidation, grasping the true nature of the process, he also showed, quantitatively, the similarity between chemical oxidation and the respiratory process.

    Photosynthesis was another biological phenomenon that occupied the attention of the chemists of the late 18th century. The demonstration, through the combined work of Joseph Priestley, Jan Ingenhousz, and Jean Senebier, that photosynthesis is essentially the reverse of respiration was a milestone in the development of biochemical thought.

    In spite of these early fundamental discoveries, rapid progress in biochemistry had to wait upon the development of structural organic chemistry, one of the great achievements of 19th-century science. A living organism contains many thousands of different chemical compounds. The elucidation of the chemical transformations undergone by these compounds within the living cell is a central problem of biochemistry. Clearly, the determination of the molecular structure of the organic substances present in living cells had to precede the study of the cellular mechanisms whereby these substances are synthesized and degraded.

    There are few sharp boundaries in science, and the boundaries between organic and physical chemistry, on the one hand, and biochemistry, on the other, have always shown much overlap. Biochemistry has borrowed the methods and theories of organic and physical chemistry and applied them to physiological problems. Progress in this path was at first impeded by a stubborn misconception in scientific thinking—the error of supposing that the transformations undergone by matter in the living organism were not subject to the chemical and physical laws that applied to inanimate substances and that consequently these “vital” phenomena could not be described in ordinary chemical or physical terms. Such an attitude was taken by the vitalists, who maintained that natural products formed by living organisms could never be synthesized by ordinary chemical means. The first laboratory synthesis of an organic compound, urea, by Friedrich Wöhler in 1828, was a blow to the vitalists but not a decisive one. They retreated to new lines of defense, arguing that urea was only an excretory substance—a product of breakdown and not of synthesis. The success of the organic chemists in synthesizing many natural products forced further retreats of the vitalists. It is axiomatic in modern biochemistry that the chemical laws that apply to inanimate materials are equally valid within the living cell.

    At the same time that progress was being impeded by a misplaced kind of reverence for living phenomena, the practical needs of man operated to spur the progress of the new science. As organic and physical chemistry erected an imposing body of theory in the 19th century, the needs of the physician, the pharmacist, and the agriculturalist provided an ever-present stimulus for the application of the new discoveries of chemistry to various urgent practical problems.

    Two outstanding figures of the 19th century, Justus von Liebig and Louis Pasteur, were particularly responsible for dramatizing the successful application of chemistry to the study of biology. Liebig studied chemistry in Paris and carried back to Germany the inspiration gained by contact with the former students and colleagues of Lavoisier. He established at Giessen a great teaching and research laboratory, one of the first of its kind, which drew students from all over Europe. Besides putting the study of organic chemistry on a firm basis, Liebig engaged in extensive literary activity, attracting the attention of all scientists to organic chemistry and popularizing it for the layman as well. His classic works, published in the 1840s, had a profound influence on contemporary thought. Liebig described the great chemical cycles in nature. He pointed out that animals would disappear from the face of the Earth if it were not for the photosynthesizing plants, since animals require for their nutrition the complex organic compounds that can be synthesized only by plants. The animal excretions and the animal body after death are also converted by a process of decay to simple products that can be re-utilized only by plants.

    In contrast with animals, green plants require for their growth only carbon dioxide, water, mineral salts, and sunlight. The minerals must be obtained from the soil, and the fertility of the soil depends on its ability to furnish the plants with these essential nutrients. But the soil is depleted of these materials by the removal of successive crops; hence the need for fertilizers. Liebig pointed out that chemical analysis of plants could serve as a guide to the substances that should be present in fertilizers. Agricultural chemistry as an applied science was thus born.

    In his analysis of fermentation, putrefaction, and infectious disease, Liebig was less fortunate. He admitted the similarity of these phenomena but refused to admit that living organisms might function as the causative agents. It remained for Pasteur to clarify that matter. In the 1860s Pasteur proved that various yeasts and bacteria were responsible for “ferments,” substances that caused fermentation and, in some cases, disease. He also demonstrated the usefulness of chemical methods in studying these tiny organisms and was the founder of what came to be called bacteriology.

    Later, in 1877, Pasteur's ferments were designated as enzymes, and, in 1897, the German chemist E. Buchner clearly showed that fermentation could occur in a press juice of yeast, devoid of living cells. Thus a life process of cells was reduced by analysis to a nonliving system of enzymes. The chemical nature of enzymes remained obscure until 1926, when the first pure crystalline enzyme (urease) was isolated. This enzyme and many others subsequently isolated proved to be proteins, which had already been recognized as high-molecular-weight chains of subunits called amino acids.

    The mystery of how minute amounts of dietary substances known as the vitamins prevent diseases such as beriberi, scurvy, and pellagra became clear in 1935, when riboflavin (vitamin B2) was found to be an integral part of an enzyme. Subsequent work has substantiated the concept that many vitamins are essential in the chemical reactions of the cell by virtue of their role in enzymes.

    In 1929 the substance adenosine triphosphate (ATP) was isolated from muscle. Subsequent work demonstrated that the production of ATP was associated with respiratory (oxidative) processes in the cell. In 1940 F.A. Lipmann proposed that ATP is the common form of energy exchange in many cells, a concept now thoroughly documented. ATP has been shown also to be a primary energy source for muscular contraction.

    The use of isotopes of chemical elements—at first heavy (stable) isotopes and later radioactive ones—to trace the pathway of substances in the animal body was initiated in 1935 by two U.S. chemists, R. Schoenheimer and D. Rittenberg. That technique provided one of the most important tools for investigating the complex chemical changes that occur in life processes. At about the same time, other workers localized the sites of metabolic reactions by ingenious technical advances in the studies of organs, tissue slices, cell mixtures, individual cells, and, finally, individual cell constituents, such as nuclei, mitochondria, ribosomes, lysosomes, and membranes.

    In 1869 a substance was isolated from the nuclei of pus cells and was called nucleic acid; it later proved to be deoxyribonucleic acid (DNA). But it was not until 1944 that the significance of DNA as genetic material was revealed, when bacterial DNA was shown to change the hereditary makeup of other bacterial cells. Within a decade of that discovery, the double helix structure of DNA was proposed by Watson and Crick, providing a firm basis for understanding how DNA is involved in cell division and in maintaining genetic characteristics.

    Advances have continued since that time, with such landmark events as the first chemical synthesis of a protein, the detailed mapping of the arrangement of atoms in some enzymes, and the elucidation of intricate mechanisms of metabolic regulation, including the molecular action of hormones.
     
    Areas of study

    A description of life at the molecular level includes a description of all the complexly interrelated chemical changes that occur within the cell—i.e., the processes known as intermediary metabolism. The processes of growth, reproduction, and heredity, also subjects of the biochemist's curiosity, are intimately related to intermediary metabolism and cannot be understood independently of it. The properties and capacities exhibited by a complex multicellular organism can be reduced to the properties of the individual cells of that organism, and the behaviour of each individual cell can be understood in terms of its chemical structure and the chemical changes occurring within that cell. When all the chemical changes within a cell are completely described and understood, man will have achieved as complete an understanding of life as can be achieved by the intellect alone. Living processes are sufficiently complex, however, to guarantee the biochemist enough unsolved problems to last into the unforeseeable future.
     
    Chemical composition of living matter

    Every living cell contains, in addition to water and salts or minerals, a large number of organic compounds, substances composed of carbon combined with varying amounts of hydrogen and usually also of oxygen. Nitrogen, phosphorus, and sulfur are likewise common constituents. In general, the bulk of the organic matter of a cell may be classified as (1) protein, (2) carbohydrate, and (3) fat, or lipid. Nucleic acids and various other organic derivatives are also important constituents. Each class contains a great diversity of individual compounds. Many substances that cannot be classified in any of the above categories also occur, though usually not in large amounts.

    Proteins are fundamental to life, not only as structural elements (e.g., collagen) and to provide defense (as antibodies) against invading destructive forces but also because the essential biocatalysts are proteins. The chemistry of proteins is based on the researches of the German chemist Emil Fischer, whose work from 1882 demonstrated that proteins are very large molecules, or polymers, built up of about 20 different amino acids. Proteins may vary in size from small—insulin with a molecular weight of 5,700 (based on the weight of a hydrogen atom as 1)—to very large—molecules with molecular weights of more than 1,000,000. The first complete amino acid sequence was determined for the insulin molecule in the 1950s. By 1963 the chain of amino acids in the protein enzyme ribonuclease (molecular weight 12,700) had also been determined, aided by the powerful physical techniques of X-ray-diffraction analysis. In the 1960s, Nobel Prize winners J.C. Kendrew and M.F. Perutz, utilizing X-ray studies, constructed detailed atomic models of the proteins hemoglobin and myoglobin (the respiratory pigment in muscle), which were later confirmed by sophisticated chemical studies. The abiding interest of biochemists in the structure of proteins rests on the fact that the arrangement of chemical groups in space yields important clues regarding the biological activity of molecules.

    Carbohydrates include such substances as sugars, starch, and cellulose. The second quarter of the 20th century witnessed a striking advance in the knowledge of how living cells handle small molecules, including carbohydrates. The metabolism of carbohydrates became clarified during this period, and elaborate pathways of carbohydrate breakdown and subsequent storage and utilization were gradually outlined in terms of pathways and cycles (e.g., the Embden–Meyerhof glycolytic pathway and the Krebs cycle). The involvement of carbohydrates in respiration and muscle contraction was well worked out by the 1950s. Refinements of the schemes continue.

    Fats, or lipids, constitute a heterogeneous group of organic chemicals that can be extracted from biological material by organic solvents such as ether, chloroform, and benzene. The classic work concerning the formation of body fat from carbohydrates was accomplished during the early 1850s. Those studies, and later confirmatory evidence, have shown that the conversion of carbohydrate to fat occurs continuously in the body. The liver is the main site of fat metabolism. Fat absorption in the intestine, studied as early as the 1930s, still is under investigation by biochemists. The control of fat absorption is known to depend upon a combined action of secretions of the pancreas and bile salts. Abnormalities of fat metabolism, which result in disorders such as obesity and rare clinical conditions, are the subject of much biochemical research. Equally interesting to biochemists is the association between high levels of fat in the blood and the occurrence of arteriosclerosis (“hardening” of the arteries).

    Nucleic acids are large, complex compounds of very high molecular weight present in the cells of all organisms and in viruses. They are of great importance in the synthesis of proteins and in the transmission of hereditary information from one generation to the next. Originally discovered as constituents of cell nuclei (hence their name), nucleic acids were assumed for many years after their isolation in 1869 to be found nowhere else. This assumption was not challenged seriously until the 1940s, when it was determined that two kinds of nucleic acid exist: deoxyribonucleic acid (DNA), in the nuclei of all cells and in some viruses; and ribonucleic acid (RNA), in the cytoplasm of all cells and in most viruses.

    The profound biological significance of nucleic acids came gradually to light during the 1940s and 1950s. Attention turned to the mechanism by which protein synthesis and genetic transmission were controlled by nucleic acids (see below Genes). During the 1960s, experiments were aimed at refinements of the genetic code. Promising attempts were made during the late 1960s and early 1970s to accomplish duplication of the molecules of nucleic acids outside the cell—i.e., in the laboratory. By the mid-1980s genetic engineering techniques had accomplished, among other things, in vitro fertilization and the recombination of DNA (so-called gene splicing).
     
    Nutrition

    Biochemists have long been interested in the chemical composition of the food of animals. All animals require organic material in their diet, in addition to water and minerals. This organic matter must be sufficient in quantity to satisfy the caloric, or energy, requirements of the animals. Within certain limits, carbohydrate, fat, and protein may be used interchangeably for this purpose. In addition, however, animals have nutritional requirements for specific organic compounds. Certain essential fatty acids, about ten different amino acids (the so-called essential amino acids), and vitamins are required by many higher animals. The nutritional requirements of various species are similar but not necessarily identical; thus man and the guinea pig require vitamin C, or ascorbic acid, whereas the rat does not.

    That plants differ from animals in requiring no preformed organic material was appreciated soon after the plant studies of the late 1700s. The ability of green plants to make all their cellular material from simple substances—carbon dioxide, water, salts, and a source of nitrogen such as ammonia or nitrate—was termed photosynthesis. As the name implies, light is required as an energy source, and it is generally furnished by sunlight. The process itself is primarily concerned with the manufacture of carbohydrate, from which fat can be made by animals that eat plant carbohydrates. Protein can also be formed from carbohydrate, provided ammonia is furnished.

    In spite of the large apparent differences in nutritional requirements of plants and animals, the patterns of chemical change within the cell are the same. The plant manufactures all the materials it needs, but these materials are essentially similar to those that the animal cell uses and are often handled in the same way once they are formed. Plants could not furnish animals with their nutritional requirements if the cellular constituents in the two forms were not basically similar.
     
    Digestion

    The organic food of animals, including man, consists in part of large molecules. In the digestive tracts of higher animals, these molecules are hydrolyzed, or broken down, to their component building blocks. Proteins are converted to mixtures of amino acids, and polysaccharides are converted to monosaccharides. In general, all living forms use the same small molecules, but many of the large complex molecules are different in each species. An animal, therefore, cannot use the protein of a plant or of another animal directly but must first break it down to amino acids and then recombine the amino acids into its own characteristic proteins. The hydrolysis of food material is necessary also to convert solid material into soluble substances suitable for absorption. The liquefaction of stomach contents aroused the early interest of observers, long before the birth of modern chemistry, and the hydrolytic enzymes secreted into the digestive tract were among the first enzymes to be studied in detail. Pepsin and trypsin, the proteolytic enzymes of gastric and pancreatic juice, respectively, continue to be intensively investigated.

    The products of enzymatic action on the food of an animal are absorbed through the walls of the intestines and distributed to the body by blood and lymph. In organisms without digestive tracts, substances must also be absorbed in some way from the environment. In some instances simple diffusion appears to be sufficient to explain the transfer of a substance across a cell membrane. In other cases, however (e.g., in the case of the transfer of glucose from the lumen of the intestine to the blood), transfer occurs against a concentration gradient. That is, the glucose may move from a place of lower concentration to a place of higher concentration.

    In the case of the secretion of hydrochloric acid into gastric juice, it has been shown that active secretion is dependent on an adequate oxygen supply (i.e., on the respiratory metabolism of the tissue), and the same holds for absorption of salts by plant roots. The energy released during the tissue oxidation must be harnessed in some way to provide the energy necessary for the absorption or secretion. This harnessing is achieved by a special chemical coupling system. The elucidation of the nature of such coupling systems has been an objective of the biochemist.
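    The energetic price of such “uphill” transport can be put in numbers. Below is a minimal sketch in Python, assuming an uncharged solute and using the standard relation ΔG = RT ln(c_high / c_low); the glucose concentrations are invented for illustration, and the comparison with ATP uses its commonly cited free energy of hydrolysis of roughly 30 kJ per mole.

        import math

        R = 8.314   # gas constant, J/(mol*K)
        T = 310.0   # approximate body temperature, K

        def transport_cost(c_low, c_high):
            """Free energy (J/mol) needed to move an uncharged solute from c_low up to c_high."""
            return R * T * math.log(c_high / c_low)

        # Assumed glucose concentrations: 5 mM in the intestinal lumen, 50 mM in the blood.
        cost = transport_cost(5e-3, 50e-3)
        print(round(cost / 1000, 1))  # ~5.9 kJ/mol, well within the ~30 kJ/mol available from ATP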
     
    Blood

    One of the animal tissues that has always excited special curiosity is blood. Blood has been investigated intensively from the early days of biochemistry, and its chemical composition is known with greater accuracy and in more detail than that of any other tissue in the body. The physician takes blood samples to determine such things as the sugar content, the urea content, or the inorganic-ion composition of the blood, since these show characteristic changes in disease.

    The blood pigment hemoglobin has been intensively studied. Hemoglobin is confined within the blood corpuscles and carries oxygen from the lungs to the tissues. It combines with oxygen in the lungs, where the oxygen concentration is high, and releases the oxygen in the tissues, where the oxygen concentration is low. The hemoglobins of higher animals are related but not identical. In invertebrates, other pigments may take the place and function of hemoglobin. The comparative study of these compounds constitutes a fascinating chapter in biochemical investigation.

    The proteins of blood plasma also have been extensively investigated. The gamma-globulin fraction of the plasma proteins contains the antibodies of the blood and is of practical value as an immunizing agent. An animal develops resistance to disease largely by antibody production. Antibodies are proteins with the ability to combine with an antigen (i.e., an agent that induces their formation). When this agent is a component of a disease-causing bacterium, the antibody can protect an organism from infection by that bacterium. The chemical study of antigens and antibodies and their interrelationship is known as immunochemistry.
     
    Metabolism and hormones

    The cell is the site of a constant, complex, and orderly set of chemical changes collectively called metabolism. Metabolism is associated with a release of heat. The heat released is the same as that obtained if the same chemical change is brought about outside the living organism. This confirms the fact that the laws of thermodynamics apply to living systems just as they apply to the inanimate world. The pattern of chemical change in a living cell, however, is distinctive and different from anything encountered in nonliving systems. This difference does not mean that any chemical laws are invalidated. It instead reflects the extraordinary complexity of the interrelations of cellular reactions.

    Hormones, which may be regarded as regulators of metabolism, are investigated at three levels, to determine (1) their physiological effects, (2) their chemical structure, and (3) the chemical mechanisms whereby they operate. The study of the physiological effects of hormones is properly regarded as the province of the physiologist. Such investigations obviously had to precede the more analytical chemical studies. The chemical structures of thyroxine and adrenaline are known. The chemistry of the sex and adrenal hormones, which are steroids, has also been thoroughly investigated. The hormones of the pancreas—insulin and glucagon—and the hormones of the hypophysis (pituitary gland) are peptides (i.e., compounds composed of chains of amino acids). The structures of most of these hormones have been determined. The chemical structures of the plant hormones auxin and gibberellic acid, which act as growth-controlling agents in plants, are also known. The first and second phases of the hormone problem thus have been well, though not completely, explored, but the third phase is still in its infancy. It seems likely that different hormones exert their effects in different ways. Some may act by affecting the permeability of membranes; others appear to control the synthesis of certain enzymes. Evidently some hormones also control the activity of certain genes.
     
    Genes

    Genetic studies have shown that the hereditary characteristics of a species are maintained and transmitted by the self-duplicating units known as genes, which are composed of nucleic acids and located in the chromosomes of the nucleus. One of the most fascinating chapters in the history of the biological sciences contains the story of the elucidation, in the mid-20th century, of the chemical structure of the genes, their mode of self-duplication, and the manner in which the deoxyribonucleic acid (DNA) of the nucleus causes the synthesis of ribonucleic acid (RNA), which, among its other activities, causes the synthesis of protein. Thus, the capacity of a protein to behave as an enzyme is determined by the chemical constitution of the gene (DNA) that directs the synthesis of the protein. The relationship of genes to enzymes has been demonstrated in several ways. The first successful experiments, devised by the Nobel Prize winners George W. Beadle and Edward L. Tatum, involved the bread mold Neurospora crassa; the two men were able to collect a variety of strains that differed from the parent strain in nutritional requirements. Such strains had undergone a mutation (change) in the genetic makeup of the parent strain. The mutant strains required a particular amino acid not required for growth by the parent strain. It was then shown that such a mutant had lost an enzyme essential for the synthesis of the amino acid in question. The subsequent development of techniques for the isolation of mutants with specific nutritional requirements led to a special procedure for studying intermediary metabolism.
     
    Evolution and origin of life

    The exploration of space beginning in the mid-20th century intensified speculation about the possibility of life on other planets. At the same time, man was beginning to understand some of the intimate chemical mechanisms used for the transmission of hereditary characteristics. It was possible, by studying protein structure in different species, to see how the amino acid sequences of functional proteins (e.g., hemoglobin and cytochrome) have been altered during phylogeny (the development of species). It was natural, therefore, that biochemists should look upon the problem of the origin of life as a practical one. The synthesis of a living cell from inanimate material was not regarded as an impossible task for the future.
     
    Applied biochemistry

    An early objective in biochemistry was to provide analytical methods for the determination of various blood constituents because it was felt that abnormal levels might indicate the presence of metabolic diseases. The clinical chemistry laboratory now has become a major investigative arm of the physician in the diagnosis and treatment of disease and is an indispensable unit of every hospital. Some of the older analytical methods directed toward diagnosis of common diseases are still the most commonly used—for example, tests for determining the levels of blood glucose, in diabetes; urea, in kidney disease; uric acid, in gout; and bilirubin, in liver and gallbladder disease. With development of the knowledge of enzymes, determination of certain enzymes in blood plasma has assumed diagnostic value, such as alkaline phosphatase, in bone and liver disease; acid phosphatase, in prostatic cancer; amylase, in pancreatitis; and lactate dehydrogenase and transaminase, in cardiac infarction. Electrophoresis of plasma proteins is commonly employed to aid in the diagnosis of various liver diseases and forms of cancer. Both electrophoresis and ultracentrifugation of serum constituents (lipoproteins) are used increasingly in the diagnosis of atherosclerosis and heart disease and in the evaluation of therapy. Many specialized and sophisticated methods have been introduced, and machines have been developed for the simultaneous automated analysis of many different blood constituents in order to cope with increasing medical needs.

    Analytical biochemical methods have also been applied in the food industry to develop crops superior in nutritive value and capable of retaining nutrients during the processing and preservation of food. Research in this area is directed particularly to preserving vitamins as well as colour and taste, all of which may suffer loss if oxidative enzymes remain in the preserved food. Tests for enzymes are used for monitoring various stages in food processing.

    Biochemical techniques have been fundamental in the development of new drugs. The testing of potentially useful drugs includes studies on experimental animals and man to observe the desired effects and also to detect possible toxic manifestations; such studies depend heavily on many of the clinical biochemistry techniques already described. Although many of the commonly used drugs have been developed on a rather empirical (trial-and-error) basis, an increasing number of therapeutic agents have been designed specifically as enzyme inhibitors to interfere with the metabolism of a host or invasive agent. Biochemical advances in the knowledge of the action of natural hormones and antibiotics promise to aid further in the development of specific pharmaceuticals.
     
    Methods in biochemistry

    Like other sciences, biochemistry aims at quantifying, or measuring, results, sometimes with sophisticated instrumentation. The earliest approach to a study of the events in a living organism was an analysis of the materials entering an organism (foods, oxygen) and those leaving (excretion products, carbon dioxide). This is still the basis of so-called balance experiments conducted on animals, in which, for example, both foods and excreta are thoroughly analyzed. For this purpose many chemical methods involving specific colour reactions have been developed, requiring spectrum-analyzing instruments (spectrophotometers) for quantitative measurement. Gasometric techniques are commonly used for measurements of oxygen and carbon dioxide, yielding respiratory quotients (the ratio of carbon dioxide produced to oxygen consumed). Somewhat more detail has been gained by determining the quantities of substances entering and leaving a given organ and also by incubating slices of a tissue in a physiological medium outside the body and analyzing the changes that occur in the medium. Because these techniques yield only an overall picture of metabolic capacities, it became necessary to disrupt cellular structure (homogenization) and to isolate the individual parts of the cell—nuclei, mitochondria, lysosomes, ribosomes, membranes—and finally the various enzymes and discrete chemical substances of the cell in an attempt to understand the chemistry of life more fully.
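    The respiratory quotient mentioned above is simple arithmetic, and a worked example makes the diagnostic use of the number concrete. A minimal sketch in Python follows; the gas volumes are invented for illustration, and the benchmark values (about 1.0 for carbohydrate, about 0.7 for fat) are the commonly cited ones.

        def respiratory_quotient(co2_produced, o2_consumed):
            """RQ: volume of CO2 produced divided by volume of O2 consumed (same units)."""
            return co2_produced / o2_consumed

        print(respiratory_quotient(200.0, 200.0))  # 1.0, typical of carbohydrate oxidation
        print(respiratory_quotient(141.0, 200.0))  # ~0.7, typical of fat oxidation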
     
    Centrifugation and electrophoresis

    An important tool in biochemical research is the centrifuge, which through rapid spinning imposes high centrifugal forces on suspended particles, or even molecules in solution, and causes separations of such matter on the basis of differences in weight. Thus, red cells may be separated from plasma of blood, nuclei from mitochondria in cell homogenates, and one protein from another in complex mixtures. Proteins are separated by ultracentrifugation—very high speed spinning; with appropriate photography of the protein layers as they form in the centrifugal field, it is possible to determine the molecular weights of proteins.
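    Such molecular-weight determinations rest on the Svedberg equation, M = sRT / [D(1 − v̄ρ)], which combines the sedimentation coefficient s with the diffusion coefficient D. Below is a minimal sketch in Python using approximate literature values for hemoglobin; the figures are given here only to illustrate the arithmetic, not as exact constants.

        R = 8.314          # gas constant, J/(mol*K)
        T = 293.0          # temperature, K
        s = 4.5e-13        # sedimentation coefficient, seconds (4.5 svedbergs)
        D = 6.9e-11        # diffusion coefficient, m^2/s
        v_bar = 0.749e-3   # partial specific volume of the protein, m^3/kg
        rho = 998.0        # density of the solvent (water), kg/m^3

        M = s * R * T / (D * (1.0 - v_bar * rho))  # molecular weight, kg/mol
        print(round(M * 1000))  # about 63,000 g/mol, close to hemoglobin's accepted ~64,500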

    Another property of biological molecules that has been exploited for separation and analysis is their electrical charge. Amino acids and proteins possess net positive or negative charges according to the acidity of the solution in which they are dissolved. In an electric field, such molecules adopt different rates of migration toward the positively (anode) or negatively (cathode) charged poles, and this permits separation. Such separations can be effected in solutions or when the proteins are carried on a stationary medium such as cellulose (filter paper), starch, or acrylamide gels. By appropriate colour reactions of the proteins and scanning of colour intensities, a number of proteins in a mixture may be measured. Separate proteins may be isolated and identified by electrophoresis, and the purity of a given protein may be determined. (Electrophoresis of human hemoglobin revealed the abnormal hemoglobin in sickle-cell anemia, the first definitive example of a “molecular disease.”)
     
    Chromatography and isotopes

    The different solubilities of substances in aqueous and organic solvents provide another basis for analysis. In its earlier form, a separation was conducted in complex apparatus by partition of substances in various solvents. A simplified form of the same principle evolved as “paper chromatography,” in which small amounts of substances could be separated on filter paper and identified by appropriate colour reactions. In contrast to electrophoresis, this method has been applied to a wide variety of biological compounds and has contributed enormously to research in biochemistry.

    The general principle has been extended from filter paper strips to columns of other relatively inert media, permitting larger scale separation and identification of closely related biological substances. Particularly noteworthy has been the separation of amino acids by chromatography in columns of ion-exchange resins, permitting the determination of the exact amino acid composition of proteins. Following such determination, other techniques of organic chemistry have been used to elucidate the actual sequence of amino acids in complex proteins. Another technique of column chromatography is based on the relative rates of penetration of molecules into beads of a complex carbohydrate according to the size of the molecules. Larger molecules are excluded relative to smaller molecules and emerge first from a column of such beads. This technique not only permits separation of biological substances but also provides estimates of molecular weights.
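    In practice such a gel-filtration column is calibrated with standards of known molecular weight, since over the working range log10(MW) falls roughly linearly with elution volume. A minimal sketch in Python follows; every standard, elution volume, and the unknown below are invented for illustration.

        import numpy as np

        # Calibration standards: molecular weight (daltons) and elution volume (ml).
        std_mw = np.array([670_000.0, 158_000.0, 44_000.0, 17_000.0, 1_350.0])
        std_ml = np.array([9.2, 11.5, 13.8, 15.9, 18.7])

        # Fit log10(MW) as a linear function of elution volume.
        slope, intercept = np.polyfit(std_ml, np.log10(std_mw), 1)

        def estimate_mw(elution_ml):
            """Estimate the molecular weight of an unknown from its elution volume."""
            return 10.0 ** (slope * elution_ml + intercept)

        print(f"{estimate_mw(12.6):.0f}")  # an unknown eluting at 12.6 ml: roughly 9 x 10**4 daltons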

    Perhaps the single most important technique in unravelling the complexities of metabolism has been the use of isotopes (heavy or radioactive elements) in labelling biological compounds and “tracing” their fate in metabolism. Measurement of the isotope-labelled compounds has required considerable technology in mass spectroscopy and radioactive detection devices.

    A variety of other physical techniques, such as nuclear magnetic resonance, electron spin spectroscopy, circular dichroism, and X-ray crystallography, have become prominent tools in revealing the relation of chemical structure to biological function.
     
    Elmer H. Stotz; Birgit Vennesland
    Additional Reading
    Overviews are provided by Thomas M. Devlin (ed.), Textbook of Biochemistry: With Clinical Correlations, 3rd ed. (1992), a good general textbook for medical and graduate students; Lubert Stryer, Biochemistry, 4th ed. (1995), with excellent illustrations; Albert L. Lehninger, David L. Nelson, and Michael M. Cox, Principles of Biochemistry, 2nd ed. (1993); and J. David Rawn, Biochemistry, international ed. (1989), a strong text still of great utility. Joseph Needham (ed.), The Chemistry of Life: Eight Lectures on the History of Biochemistry (1970), provides a brief development of the important areas of photosynthesis, enzymes, microbiology, neurology, hormones, vitamins, and other topics. Frederic Lawrence Holmes, Hans Krebs, 2 vol. (1991–93), is a dense biography of one of the founders of modern biochemistry. Robert E. Kohler, From Medical Chemistry to Biochemistry (1982), compares the growth of the discipline in the United States, Britain, and Germany.

    Particular topics are addressed in J. Etienne-Decant and F. Millot, Genetic Biochemistry: From Gene to Protein (1988; originally published in French, 1987), an overview of information flow from genes to proteins; Maria C. Lindner (ed.), Nutritional Biochemistry and Metabolism: With Clinical Applications, 2nd ed. (1991), on the dynamic roles that nutrients play in the structure and function of the human body; and P.K. Stumpf and E.E. Conn (eds.), The Biochemistry of Plants: A Comprehensive Treatise (1980– ). Richard E. Dickerson and Irving Geis, The Structure and Action of Proteins (1969), treats one of the essential types of biochemical molecules used by cells. Proteins and other molecules also are described and illustrated in Linus Pauling and Roger Hayward, The Architecture of Molecules (1964).

    The subject of endocrinology has changed markedly amid the genetic revolution. A reliable work on this topic is Franklyn F. Bolander, Molecular Endocrinology, 2nd ed. (1994). D.G. Hardie, Biochemical Messengers: Hormones, Neurotransmitters, and Growth Factors (1991), also includes coverage of other signaling devices that have evolved at the molecular level.

    Medical education

    Introduction

    course of study directed toward imparting to persons seeking to become physicians the knowledge and skills required for the prevention and treatment of disease. It also develops the methods and objectives appropriate to the study of the still unknown factors that produce disease or favour well-being.

    Among the goals of medical education is the production of physicians sensitive to the health needs of their country, capable of ministering to those needs, and aware of the necessity of continuing their own education. It therefore follows that the plan of education, the medical curriculum, should not be the same in all countries. Although there may be basic elements common to all, the details should vary from place to place and from time to time. Whatever form the curriculum takes, ideally it will be flexible enough to allow modification as circumstances alter, medical knowledge grows, and needs change.

    Attention in this article is focused primarily on general medical education.

     

    History of medical education

    Although it is difficult to identify the origin of medical education, authorities usually consider that it began with the ancient Greeks' method of rational inquiry, which introduced the practice of observation and reasoning regarding disease. Rational interpretation and discussion, it is theorized, led to teaching and thus to the formation of schools such as that at Cos, where the Greek physician Hippocrates is said to have taught in the 5th century BC and originated the oath that became a credo for practitioners through the ages.

    Later, the Christian religion greatly contributed to both the learning and the teaching of medicine in the West because it favoured not only the protection and care of the sick but also the establishment of institutions where collections of sick people encouraged observation, analysis, and discussion among physicians by furnishing opportunities for comparison. Apprenticeship training in monastic infirmaries and hospitals dominated medical education during the early Middle Ages. A medical school in anything like its present form, however, did not evolve until the establishment of the one at Salerno in southern Italy between the 9th and 11th centuries. Even there teaching was by the apprentice system, but an attempt was made at systemization of the knowledge of the time, a series of health precepts was drawn up, and a form of registration to practice was approved by the Holy Roman emperor Frederick II. During the same period, medicine and medical education were flourishing in the Muslim world at such centres as Baghdad, Cairo, and Córdoba.

    With the rise of the universities in Italy and later in Cracow, Prague, Paris, Oxford, and elsewhere in western Europe, the teachers of medicine were in some measure drawn away from the life of the hospitals and were offered the attractions and prestige of university professorships and lectureships. As a result, the study of medicine led more often to a familiarity with theories about disease than with actual sick persons. However, the establishment in 1518 of the Royal College of Physicians of London, which came about largely through the energies of Thomas Linacre, produced a system that called for examination of medical practitioners. The discovery of the circulation of the blood by William Harvey provided a stimulus to the scientific study of the processes of the body, bringing some deemphasis to the tradition of theory and doctrine.

    Gradually, in the 17th and 18th centuries, the value of hospital experience and the training of the students' sight, hearing, and touch in studying disease were reasserted. In Europe, medical education began slowly to assume its modern character in the application of an increasing knowledge of natural science to the actual care of patients. There was also encouragement of the systematic study of anatomy, botany, and chemistry, sciences at that time considered to be the basis of medicine. The return to the bedside aided the hospitals in their long evolution from dwelling places of the poor, the diseased, and the infirm, maintained by charity and staffed usually by religious orders, into relatively well-equipped, well-staffed, efficient establishments that became available to the entire community and were maintained by private or public expense.

    It was not until the mid-19th century, however, that an ordered pattern of science-oriented teaching was established. This pattern, the traditional medical curriculum, was generally adopted by Western medical schools. It was based upon teaching, where the student mostly listens, rather than learning, where the student is more investigative. The clinical component, largely confined to hospitals (charitable institutions staffed by unpaid consultants), was not well organized. The new direction in medical education was aided in Britain by the passage of the Medical Act of 1858, which has been termed the most important event in British medicine. It established the General Medical Council, which thenceforth controlled admission to the medical register and thus had great powers over medical education and examinations. Further interest in medicine grew from these advances, which opened the way for the discoveries of Louis Pasteur, who showed the relation of microorganisms to certain diseases; Joseph Lister's application of Pasteur's concepts to surgery; and the studies of Rudolf Virchow and Robert Koch in cellular pathology and bacteriology.

    In the United States, medical education was greatly influenced by the example set in 1893 by the Johns Hopkins Medical School in Baltimore. It admitted only college graduates with a year's training in the natural sciences. Its clinical work was superior because the school was supplemented by the Johns Hopkins Hospital, created expressly for teaching and research carried on by members of the medical faculty. The adequacy of medical schools in the United States was improved after the Carnegie Foundation for the Advancement of Teaching published in 1910 a report by the educator Abraham Flexner. In the report, which had an immediate impact, he pointed out that medical education actually is a form of education rather than a mysterious process of professional initiation or apprenticeship. As such, it needs an academic staff, working full-time in their departments, whose whole responsibility is to their professed subject and to the students studying it. Medical education, the report further stated, needs laboratories, libraries, teaching rooms, and ready access to a large hospital, the administration of which should reflect the presence and influence of the academic staff. Thus the nature of the teaching hospital was also influenced. Aided by the General Education Board, the Rockefeller Foundation, and a large number of private donors, U.S. and Canadian medical education was characterized by substantial improvements from 1913 to 1929 in such matters as were stressed in the Flexner report.

     

    Modern patterns of medical education

    As medical education developed after the Flexner report was published, the distinctive feature was the thoroughness with which theoretical and scientific knowledge were fused with what experience teaches in the practical responsibility of taking care of human beings. Medical education eventually developed into a process that involved four generally recognized stages: premedical, undergraduate, postgraduate, and continuing education.

     

    Premedical education and admission to medical school

    In the United States, Britain, and the Commonwealth countries, generally, medical schools are inclined to limit the number of students admitted so as to increase the opportunities for each student. In western Europe, South America, and most other countries, no exact limitation of numbers of students is in effect, though there is a trend toward such limitation in some of the western European schools. Some medical schools in North America have developed ratios of teaching staff to students as high as 1 to 1 or 1 to 2, in contrast with 1 teacher to 20 or even 100 students in certain universities in other countries. The number of students applying to medical school greatly exceeds the number finally selected in most countries.

    Requirements to enter medical school, of course, vary from country to country, and in some countries, such as the United States, from university to university. Generally speaking, in Western universities, there is a requirement for a specified number of years of undergraduate work and passing of a test, possibly state regulated, and a transcript of grades. In the United States entry into medical school is highly competitive, especially in the more prestigious universities. Stanford University, for instance, accepts only about 5 percent of its applicants. Most U.S. schools require the applicant to take the Medical College Admission Test, which measures aptitude in medically related subjects. Other requirements may include letters of recommendation and a personal interview. Many U.S. institutions require a bachelor's degree or its equivalent from an undergraduate school. A specific minimum grade point average is not required, but most students entering medical school have between an A and a B average.

    The premedical courses required in most countries emphasize physics, chemistry, and biology. These are required in order to make it possible subsequently to present courses in anatomy, physiology, biochemistry, and pharmacology with precision and economy of time to students prepared in scientific method and content. Each of the required courses includes laboratory periods throughout the full academic year. Student familiarity with the use of instruments and laboratory procedures tends to vary widely from country to country, however.

     

    Undergraduate education

    The medical curriculum also varies from country to country. Most U.S. curriculums cover four years; in Britain five years is normal. The early part of the medical school program is sometimes called the preclinical phase. Medical schools usually begin their work with the study of the structure of the body and its formation: anatomy, histology, and embryology. Concurrently, or soon thereafter, come studies related to function—i.e., physiology, biochemistry, pharmacology, and, in many schools, biophysics. After the microscopic study of normal tissues (histology) has begun, the student is usually introduced to pathological anatomy, bacteriology, immunology, parasitology—in short, to the agents of disease and the changes that they cause in the structure and function of the tissues. Courses in medical psychology, biostatistics, public health, alcoholism, biomedical engineering, emergency medicine, ethical problems, and other less traditional courses are becoming more common in the first years of the medical curriculum.

    The two or more clinical years of an effective curriculum are characterized by active student participation in small group conferences and discussions, a decrease in the number of formal lectures, and an increase in the amount of contact with patients in teaching hospitals and clinics.

    Clinical work begins with general medicine and surgery and goes on to include the major clinical specialties, including obstetrics and gynecology, pediatrics, disorders of the eye, ear, nose, throat, and skin, and psychiatry. The student works in the hospital's outpatient, emergency, and radiology departments, diagnostic laboratories, and surgical theatres. The student also studies sciences closely related to medicine, such as pathology, microbiology, hematology, immunology, and clinical chemistry and becomes familiar with epidemiology and the methods of community medicine. Some knowledge of forensic (legal) medicine is also expected. During the clinical curriculum many students have an opportunity to pursue a particular interest of their own or to enlarge their clinical experience by working in a different environment, perhaps even in a foreign country—the so-called elective period. Most students find clinical work demanding, usually requiring long hours of continuous duty and personal commitment.

    In the United States after satisfactory completion of a course of study in an accredited medical school the degree of doctor of medicine (M.D.) or doctor of osteopathy (D.O.) is conferred. In Britain and some of the other Commonwealth countries the academic degree conferred after undergraduate studies are completed is bachelor of medicine and of surgery (or chirurgery), M.B., B.S. or M.B., Ch.B. Only after further study is the M.D. degree given. Similar degrees are conferred in other countries, although they are not always of the same status.

     

    Postgraduate education

    On completion of medical school, the physician usually seeks graduate training and experience in a hospital under the supervision of competent clinicians and other teachers. In Britain a year of resident hospital work is required after qualification and before admission to the medical register. In North America, the first year of such training has been known as an internship, but it is no longer distinguished in most hospitals from the total postgraduate period, called residency. After the first year physicians usually seek further graduate education and training to qualify themselves as specialists or to fulfill requirements for a higher academic degree. Physicians seeking special postgraduate degrees are sometimes called fellows.

     

    Continuing education

    The process by which physicians keep themselves up-to-date is called continuing education. It consists of courses and training opportunities of from a few days to several months in duration, designed to enable physicians to learn of new developments within their special areas of concern. Physicians also attend medical and scientific meetings, national and international conferences, discussion groups, and clinical meetings, and they read medical journals and other materials, all of which serve to keep them aware of progress in their chosen field. Although continuing education is not a formal process, organizations designed to promote continuing education have become common. In the United States the Accreditation Council for Continuing Medical Education was formed in 1985, and some certifying boards of medical specialties have stringent requirements for continuing education.

    The quality of medical education is supervised in many countries by councils appointed by the profession as a whole. In the United States these include the Council on Medical Education and the Liaison Committee on Medical Education, both affiliates of the American Medical Association, and the American Osteopathic Association. In Britain the statutory body is the General Medical Council, most of whose members are from the profession, although only a minority of the members are appointed by it. In other countries medical education may be regulated by an office or ministry of public instruction with, in some cases, the help of special professional councils.

     

    Medical school faculty

    As applied to clinical teachers the term full-time originally implied an educational ideal: that a clinician's salary from a university should be large enough to relieve him of any reason for seeing private patients for the sake of supplementing his salary by professional fees. Full-time came to be applied, however, to a variety of modifications; it could mean that a clinical professor might supplement his salary as a teacher up to a defined maximum, might see private patients only at his hospital office, or might see such patients only a certain number of hours per week. The intent of full-time has always been to place the teacher's capacities and strength entirely at the service of his students and the patients entrusted to his care as a teacher and investigator.

    Courses in the medical sciences have commonly followed the formula of three hours of lectures and six to nine hours of laboratory work per week for a three-, six-, or nine-month course. Instruction in clinical subjects, though retaining the formal lecture, has tended to diminish the time and emphasis allowed to lectures in favour of experience with and attendance on patients. Nonetheless, the level of lecturing and formal presentation remains high in some countries.

     

    Requirements for practice

    Graduation from medical school and postgraduate work does not always allow the physician to practice. In the United States, licensure to practice medicine is controlled by boards of licensure in each state. The boards set and conduct examinations of applicants to practice within the state, and they examine the credentials of applicants who want licenses earned in other states to be accepted in lieu of examination. The National Board of Medical Examiners holds examinations leading to a certificate that is acceptable to most state boards. National laws regulating professional practice cannot be enacted in the United States. In Canada the Medical Council of Canada conducts examinations and enrolls successful candidates on the Canadian medical register, which the provincial governments accept as the main requirement for licensure. In Britain the medical register is kept by the General Medical Council, which supervises the licensing bodies; unregistered practice, however, is not illegal. In some European countries graduation from a state-controlled university or medical school in effect serves as a license to practice; the same is true for Japan.

     

    Economic aspects

    The income of a medical school is derived from four principal sources: (1) tuition and fees, (2) endowment income or appropriation from the government (taxation), (3) gifts from private sources, and (4) donation of teachers' services. Tuition or student fees are large in most English-speaking countries (except in U.S. state universities) and relatively small throughout the rest of the world. Tuition in most American schools, however, rarely makes up more than a small part of total operating expenses. The total cost of maintaining a medical school, if prorated among the students, would produce a figure many times greater than the tuition or other charges paid by each student. The costs of operating medical schools in the United States increased by about 30 times between the late 1950s and the mid-1980s.

    The expenses of medical education fall into two groups: those of the instruction given in the medical sciences and those connected with hospital teaching. In the medical sciences the costs of building maintenance, laboratory equipment and supplies, research expenses, salaries of teachers, and wages of employees are heavy but comparable to those in other departments of a university. In the clinical subjects all expenses in connection with the care of patients usually are considered as hospital expenses and are not carried on the medical school budget, which is normally reserved for the expenses of teaching and research. Here the heavy expenses are salaries of clinical teachers and the cost of studying cases of illness with a thoroughness appropriate to their use as teaching material.

    To a considerable degree in free-market countries, the cost of securing an adequate medical education has tended to exclude the student whose family cannot contribute a large share of tuition and living expenses for four to ten years. This difficulty is offset in some medical schools by loan funds and scholarships, but these aids are commonly offered only in the second or subsequent years. In Britain scholarships and maintenance grants are available through state and local educational authority funds, so that an individual can secure a medical education even though the parents may not be able to afford its cost.

     

    Scientific and international aspects

    Medical education has the double task of passing on to students what is known and of attacking what is still unknown. The cost of medical research is borne by only a few; the benefits are shared by many. There are countries whose citizens are too poor to support physicians or to use them, countries that can support a few physicians but are too poor to maintain a good medical school, countries that can maintain medical schools where what is known can be taught but where no research can be carried out, and a few countries in which teaching and research in medicine can be carried on to the great advantage of the world at large.

    A medical school having close geographical as well as administrative relationships with the rest of the university of which it forms a part usually profits by this intimate and easy contact. Medicine cannot wisely be separated from the biological sciences, and it continues to gain immensely from chemistry, physics, mathematics, and psychology, as well as from modern technology. The social sciences contribute by making physicians aware of the need for better distribution of medical care. Contact with teachers and with advancing knowledge in other faculties may also help to advance medicine.

    With the development of the World Health Organization (WHO) and the World Medical Association after World War II, there has been increasing international interest in medical education. WHO conducts a regular program for aiding countries in the development and expansion of their educational facilities. World War II showed the advantages and economy derived from satisfactory systems of medical education: defects and diseases were more widely and accurately detected among recruits than ever before, health and morale were effectively maintained among combatants, and disease and battle injuries were effectively treated.

     
    Alan Gregg, Edward Lewis Turner, Harold Scarborough

    Additional Reading

    Among the many books devoted to the subject of medical education are the following historical discussions: Abraham Flexner, Medical Education in the United States and Canada: A Report to the Carnegie Foundation for the Advancement of Teaching (1910, reprinted 1973); and Kenneth M. Ludmerer, Learning to Heal: The Development of American Medical Education (1985). For special information, see the following official publications: Association of American Medical Colleges, AAMC Directory of American Medical Education, 1986–87, 33rd ed. (1986); Medical School Admission Requirements, 1988–89, 38th ed. (1987); and Physicians for the Twenty-first Century: Report of the Project Panel on the General Professional Education of the Physician and College Preparation for Medicine (1984). Studies include Mohan L. Garg and Warren M. Kleinberg, Clinical Training and Health Care Costs: A Basic Curriculum for Medical Education (1985); and Marjorie Price Wilson and Curtis P. McLaughlin, Leadership and Management in Academic Medicine (1984). For new developments in medical education, see the periodicals The Journal of Medical Education (monthly), Medical Education (bimonthly), and WHO Chronicle (bimonthly). Opportunities for continuing medical education appear semiannually in JAMA: The Journal of the American Medical Association (weekly).

    Nobel Prize winners  

    Zinkernagel, Rolf M

    born Jan. 6, 1944, Basel, Switz.

     

    Swiss immunologist and pathologist who, along with Peter C. Doherty of Australia, received the Nobel Prize for Physiology or Medicine in 1996 for their discovery of how the immune system distinguishes virus-infected cells from normal cells.

    Zinkernagel received his M.D. from the University of Basel in 1970 and his Ph.D. from the Australian National University, Canberra, in 1975. He joined the John Curtin School of Medical Research in Canberra in 1973 as a research fellow and soon began collaborating with Doherty on a study of the role the immune system plays in protecting mice against infection by the lymphocytic choriomeningitis virus, which can cause meningitis. Their research centred on the white blood cells known as cytotoxic T lymphocytes (or cytotoxic T cells), which act to destroy invading viruses and virus-infected cells.

    In their experiments, Zinkernagel and Doherty found that T cells from an infected mouse would destroy virus-infected cells from another mouse only if both mice belonged to a genetically identical strain. The T cells would ignore virus-infected cells taken from a different strain of laboratory mice. Further research showed that in order to kill infected cells, T cells must recognize two major signals on the surface of an infected cell: those of the infecting virus and certain “self” molecules called major histocompatibility complex (MHC) antigens, which tell the immune system that a particular cell belongs to one's own body. In the experiment, the T cells from one mouse strain could not recognize MHC antigens from another on the infected cells, so no immune response occurred. The discovery that T cells must simultaneously recognize both self and foreign molecules on a cell in order to react against it formed the basis for a new understanding of the general mechanisms of cellular immunity.
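
    The dual-recognition rule can be stated as a simple conjunction. The toy model below is my own illustration of that logic (the class names and strain labels are invented, not the authors' notation): a T cell kills a target cell only when the viral antigen it was primed against and a matching "self" MHC type are both present.

        # Toy model of MHC-restricted killing (illustrative only).
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Cell:
            mhc_type: str                  # strain-specific "self" marker
            infected_with: Optional[str]   # virus name, or None if healthy

        @dataclass
        class TCell:
            mhc_type: str      # MHC of the mouse strain the T cell came from
            target_virus: str  # virus the T cell was primed against

            def kills(self, cell: Cell) -> bool:
                # Both signals must match: foreign (virus) AND self (MHC).
                return (cell.infected_with == self.target_virus
                        and cell.mhc_type == self.mhc_type)

        t = TCell(mhc_type="strain_A", target_virus="LCMV")
        print(t.kills(Cell("strain_A", "LCMV")))  # True: same strain, infected
        print(t.kills(Cell("strain_B", "LCMV")))  # False: MHC mismatch, ignored
        print(t.kills(Cell("strain_A", None)))    # False: healthy cell spared

    The second call mirrors the key observation in the mouse experiments: an infected cell from a different strain is ignored because its MHC marker does not match that of the T cell's own strain.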

    After leaving the Curtin School in 1975, Zinkernagel served as an associate professor (1979–88) and full professor (1988–92) at the University of Zürich and became head of the university's Institute of Experimental Immunology in 1992. In 1995 Zinkernagel received an Albert Lasker Basic Medical Research Award for his studies on T-cell recognition of self and foreign molecules. His interest in developing drugs that modulate immune function led to his election to the board of directors of Novartis AG in 1999 and to that of Cytos Biotechnology AG, on which he served from 2000 to 2003.

     

    Nobel Prize winners  

    Zsigmondy, Richard

    born April 1, 1865, Vienna, Austrian Empire
     
    died Sept. 23, 1929, Göttingen, Ger.

    Austrian chemist who received the Nobel Prize for Chemistry in 1925 for research on colloids, which consist of submicroscopic particles dispersed throughout another substance. In the course of this research he invented the ultramicroscope.

    After receiving his doctorate from the University of Munich in 1889, Zsigmondy did research in Berlin and then joined the faculty of the University of Graz, Austria. From 1908 to 1929 he was director of the Institute for Inorganic Chemistry at the University of Göttingen.

    While employed in a glassworks (1897), Zsigmondy turned his attention to the colloidal gold present in ruby glass and prepared a suspension of colloidal gold in water. He theorized that much could be learned about the colloidal state of matter by studying the manner in which such particles scatter light. To facilitate this study, he and Heinrich Siedentopf developed the ultramicroscope (1903), which illuminates a specimen at right angles to the viewing axis so that particles too small to be resolved by an ordinary microscope appear as bright points of scattered light. Zsigmondy used it to investigate various aspects of colloids, including Brownian motion. His work proved particularly helpful in biochemistry and bacteriology.
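
    For quantitative context (an addition for this edition, not part of the original article): the Brownian motion that Zsigmondy observed through the ultramicroscope is described by Einstein's 1905 theory, in which the mean squared displacement of a suspended particle grows linearly with time and the diffusion coefficient obeys the Stokes-Einstein relation:

        \[
        \langle x^{2} \rangle = 2Dt, \qquad D = \frac{k_{B}T}{6\pi\eta r},
        \]

    where k_B is Boltzmann's constant, T the absolute temperature, η the viscosity of the medium, and r the particle radius. Measuring the displacement of colloidal particles of known size therefore yields Boltzmann's constant, and with it Avogadro's number, a programme carried out contemporaneously by Jean Perrin.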