Saturday, January 28, 2012

About vitamin D

One of the most important things you can do for your health is to learn about the benefits of proper sunlight exposure and vitamin D. This section provides information on how to effectively incorporate sun exposure and/or vitamin D supplementation into your daily health regimen.
READ ARTICLE

Friday, January 27, 2012

The Role of Flaxseed Oil in Preventing Breast Cancer

NATURE'S ANSWER TO BREAST CANCER
The National Cancer Institute has begun to focus on diet as a preventive measure against certain cancers. Certain fruits, vegetables, and grains have been found to possess potent cancer-fighting and protective compounds known as phytonutrients (naturally occurring, nontoxic plant chemicals). Perhaps the most powerful phytonutrient plant food discovered to date is flaxseed. This ancient grain contains a phytonutrient called lignan that, once ingested, is converted to compounds that compete with estrogen for binding sites on estrogen receptors. The net result is a flushing of excess estrogen (linked to a high incidence of colon and breast cancer) from the body. Ironically, lignans may someday replace the need for estrogen therapy in postmenopausal women, because they resemble estrogen at the receptor site but without the potential risk!
READ ARTICLE

Flaxseeds

The warm, earthy, and subtly nutty flavor of flaxseeds, combined with an abundance of omega-3 fatty acids, makes them an increasingly popular addition to the diets of many a health-conscious consumer. Whole and ground flaxseeds, as well as flaxseed oil, are available throughout the year.

Flaxseeds are slightly larger than sesame seeds and have a hard shell that is smooth and shiny. Their color ranges from deep amber to reddish brown depending upon whether the flax is of the golden or brown variety. While whole flaxseeds feature a soft crunch, the nutrients in ground seeds are more easily absorbed.
READ ARTICLE

Flaxseed and flaxseed oil - Mayo Clinic

Flaxseed and its derivative flaxseed oil/linseed oil are rich sources of the essential fatty acid alpha-linolenic acid, a biologic precursor to longer-chain omega-3 fatty acids such as eicosapentaenoic acid. Although omega-3 fatty acids have been associated with improved cardiovascular outcomes, evidence from human trials is mixed regarding the efficacy of flaxseed products for coronary artery disease or hyperlipidemia.
READ ARTICLE

Flaxseed

Flaxseed is the seed from the plant Linum usitatissimum. The seed or the seed oil is used to make medicine. The information on this page concerns medicine made from the SEED only. There is a separate listing for flaxseed OIL.

People use flaxseed for many conditions related to the gastrointestinal (GI) tract, including ongoing constipation, colon damage due to overuse of laxatives, diarrhea, inflammation of pouches in the wall of the large intestine (diverticulitis), irritable bowel syndrome (IBS) or irritable colon, sores in the lining of the large intestine (ulcerative colitis), inflammation of the lining of the stomach (gastritis), and inflammation of the small intestine (enteritis).

Flaxseed is also used for disorders of the heart and blood vessels, including high cholesterol, “hardening of the arteries” (atherosclerosis), high blood pressure (hypertension), and coronary artery disease.

Flaxseed is also used for acne, attention deficit-hyperactivity disorder (ADHD), kidney problems in people with a disease called systemic lupus erythematosus (SLE), symptoms of menopause, and breast pain. It is also used for diabetes, obesity and weight loss, HIV/AIDS, depression, bladder infections, malaria, and rheumatoid arthritis.

Some people use flaxseed to lower their risk of getting weak bones (osteoporosis) and to protect against breast cancer, lung cancer, colon cancer, and prostate cancer.
READ ARTICLE

Thursday, January 26, 2012

Black cohosh (Cimicifuga racemosa [L.] Nutt.)

Black cohosh is popular as an alternative to hormonal therapy in the treatment of menopausal (climacteric) symptoms such as hot flashes, mood disturbances, diaphoresis, palpitations, and vaginal dryness. Several studies have reported black cohosh to improve menopausal symptoms for up to six months, although the current evidence is mixed.

The mechanism of action of black cohosh remains unclear and the effects on estrogen receptors or hormonal levels (if any) are not definitively known. Recent publications suggest that there may be no direct effects on estrogen receptors, although this is an area of active controversy. Safety and efficacy beyond six months have not been proven, although recent reports suggest safety of short-term use, including in women experiencing menopausal symptoms for whom estrogen replacement therapy is contraindicated. Nonetheless, caution is advisable until better-quality safety data are available. Use of black cohosh in high-risk populations (such as in women with a history of breast cancer) should be under the supervision of a licensed healthcare professional.
READ ARTICLE

Black Cohosh - Web MD

For centuries, the roots of the North American black cohosh plant have been used for various ailments. Black cohosh is now a popular remedy for the symptoms of menopause. This has been especially true since the risks of a standard treatment for menopause -- hormone therapy -- were publicized in 2002.

Why do people take black cohosh?

Black cohosh is most often used to control the symptoms of menopause, such as:

  • Headaches
  • Hot flashes
  • Mood changes
  • Sleep problems
  • Heart palpitations
  • Night sweats
  • Vaginal dryness

Some studies have found evidence that black cohosh does help with these symptoms. However, many experts consider the evidence unclear and say that we need more research.

Other uses of black cohosh have less scientific support. Women sometimes take it to regulate periods, ease PMS symptoms, and induce labor. Black cohosh has also been used to relieve arthritis pain and help lower blood pressure. There are some preliminary laboratory studies showing that black cohosh may be useful in preventing or treating prostate cancer. Definitive research has not verified black cohosh's effectiveness for these uses.

READ ARTICLE

Black Cohosh: Dietary Supplement Fact Sheet

This fact sheet provides an overview of the use of black cohosh for menopausal symptoms.

Key points

READ ARTICLE

Wednesday, January 18, 2012

Osteoporosis

Osteoporosis, a chronic progressive disease of multifactorial etiology (see Etiology), is the most common metabolic bone disease in the United States. It has been most frequently recognized in elderly white women, although it does occur in both sexes, all races, and all age groups.

This disease is considered a "silent thief" that generally does not become clinically apparent until a fracture occurs (see Clinical Presentation). Screening at-risk populations is, therefore, essential (see Workup).

Osteoporosis can affect almost the entire skeleton. It is a systemic skeletal disease characterized by low bone mass and microarchitectural deterioration of bone tissue, with a consequent increase in bone fragility.[1] The disease often does not become clinically apparent until a fracture occurs.

Osteoporosis represents an increasingly serious problem in the United States and around the world. Many individuals, male and female, experience pain, disability, and diminished quality of life as a result of having this condition. The economic burden the disease imposes is already considerable and will only grow as the population ages.[2]

Despite the adverse effects of osteoporosis, it is a condition that is often overlooked and undertreated, in large part because it is so often clinically silent before manifesting in the form of fracture. For example, a Gallup survey performed by the National Osteoporosis Foundation revealed that 75% of all women aged 45-75 years have never discussed osteoporosis with their physicians. Failure to identify at-risk patients, to educate them, and to implement preventive measures may lead to tragic consequences.

Medical care includes calcium, vitamin D, and antiosteoporotic medication such as bisphosphonates and parathyroid hormone. Antiresorptive agents currently available for osteoporosis treatment include bisphosphonates, the selective estrogen-receptor modulator (SERM) raloxifene, calcitonin, and denosumab. One anabolic agent, teriparatide (see Medication), is available as well. Surgical care includes vertebroplasty and kyphoplasty. (See Treatment and Management.)

Osteoporosis is a preventable disease that can result in devastating physical, psychosocial, and economic consequences. Prevention and recognition of the secondary causes of osteoporosis are first-line measures to lessen the impact of the disease (see the images below).

Figure: Osteoporosis of the spine. Observe the considerable reduction in overall vertebral bone density and note the lateral wedge fracture of L2.
Figure: Osteoporosis of the spine. Note the lateral wedge fracture in L3 and the central burst fracture in L5. The patient had suffered a recent fall.

READ ARTICLE

Milking Your Bones

Bone Density: The Big Dairy Fallacy

While the National Osteoporosis Foundation tells us we need more calcium to build stronger bones, especially from cow's milk, the scientific evidence does not support this.

The Chinese University of Hong Kong performed successive studies in the 1990s analyzing milk and calcium intake as they relate to the growth of children. This was an ideal place and time for such an investigation, because cow's milk was just making its way into popular use in that country, and the traditional diet was not high in calcium.

The first study looked at children from birth to five years of age. With 90 percent of the study children drinking milk, their average calcium intake was 550 mg. At age five, each child's current calcium intake did not correlate with bone mineral levels. Calcium intake during the second year of life proved to be the strongest predictor of bone "hardness" at age five (breast milk would provide the same advantage as cow's milk or other sources, or more).1

In the second study, seven-year-old children were given calcium supplements to bring their daily calcium intake up to 800 mg. Over 18 months, no increases in height or in arm or leg bone density were seen beyond those in unsupplemented children, although some improvement in spinal bone density was seen.2 At ages 12 to 13, calcium intake did not correlate with bone mineral content, except that the girls who consumed higher levels of calcium had lower bone density in their arms.3

In all these studies, higher weight and greater physical activity were strongly associated with higher bone mineral content. By adolescence, neither calcium intake nor physical activity had any further influence on bone mineral improvement.4 A 14-year British study found that in young adults, early-teenage body weight and physical activity levels determined bone mineral densities, with only a slight effect from calcium intake.5

In a widely quoted study from a British hospital, researchers added an extra glass of cow's milk to the diets of adolescent girls, comparing their growth to that of girls who drank an average of just over half a cup per day. Total daily calcium intake at the beginning of the study was 750 mg, rising to 1,100 mg in the extra-milk group. The researchers reported about a 10 percent greater bone growth rate for those with the extra milk.6 These children also gained a little more weight, but not height. The reported extra bone density could not be validated by any changes in the blood enzyme markers that typically reflect bone growth and bone resorption. Insulin-like growth factor was also found to be higher in the extra-milk group.

Several experts wrote replies to this study, which had been paid for by the U.K. dairy industry. One expert, doing his own math from the study, found the milk group to have a slightly lower average total bone mineral content at the end of the study.7

A U.S. study on child twins found some increases in bone density in the arms and spine with calcium supplements (not milk).8 Once puberty began, however, calcium provided no benefits.

While dairy promotions also praise the protein in milk, this protein may be more of a problem for osteoporosis than calcium could be a solution. The animal proteins in meat and dairy products cause calcium loss.9 The level of calcium needed in the diet depends greatly on animal protein intake.10 For many of the high-animal-protein diets of Americans, it may not be possible to consume enough calcium to compensate for the amount lost to these high-acid proteins.11 For this reason, Americans have among the highest osteoporosis rates in the world, even though their dairy intake is also among the highest. Doubling animal protein intake causes 50 percent more calcium loss. Yet when a high protein intake is soy-based, a positive calcium balance can be maintained with only 450 mg of calcium per day.12
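A rough balance calculation shows why those numbers hang together (a simplified sketch; the 50 percent figure is from the studies cited above, while the absorption fraction and baseline loss below are purely illustrative). If a fraction $a$ of dietary calcium is absorbed and $L$ mg is lost each day, staying in balance requires an intake $I$ satisfying

$$ I \cdot a \ge L \quad\Longrightarrow\quad I \ge \frac{L}{a}. $$

With an illustrative absorption of $a = 0.3$ and baseline losses of $L = 150$ mg/day, balance needs $I \ge 500$ mg/day; a 50 percent rise in losses ($L = 225$ mg/day) pushes the requirement to $I \ge 750$ mg/day. In this simple linear model the required intake scales directly with losses, which is the article's point: on a very high animal-protein diet, the loss term may grow too large for any realistic intake to offset.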

The chief concern over bone density is that it gradually declines with age. At a certain point of bone loss, the term osteoporosis is used. This is the level at which low-trauma bone fractures become more frequent. Spinal fractures are a problem, as are hip and arm fractures, which are easier to measure for research purposes. The highest level of bone density attained in young adulthood correlates with the bone density maintained in later decades. What is not entirely understood is how much impact dietary factors have on these events. Some studies suggest that childhood calcium intake before puberty may have some slight positive effect. At the same time that diabetes, cancer, and other concerns may limit the amount of dairy that should be given to a child, it also appears that bone protection is no reason to promote dairy consumption beyond childhood.

In 1986 a Harvard researcher produced a graph that demonstrated a nearly direct relationship between calcium intake and hip fractures: the more calcium, the more fractures.13,14

A 1987 study of 106 adult women suggested that calcium intake between 500 and 1,400 mg per day led to no difference in bone mineral densities.15

A larger Italian study found that in women who consumed between 440 and 1,025 mg of calcium per day, a slightly increased number of hip fractures occurred with higher milk intakes.16


READ ARTICLE

Rethinking osteoporosis - Causes of osteoporosis

It is often suggested that the major causes of osteoporosis are low calcium intake and lower estrogen levels at menopause. From a cross-cultural perspective, however, this is not always true. People in many countries have lower calcium intakes than in the US, yet osteoporosis is less prevalent in those cultures. The Japanese calcium intake, for example, has only recently risen to 540 mg per day, much less than the US RDA of 1,200 mg per day for post-menopausal women. And yet the US hip fracture rate is twice that of Japan! In fact, research has shown that countries with the highest calcium intakes have the highest hip fracture rates. Furthermore, I have identified at least 19 nutrients in addition to calcium that are essential to bone health.

The same holds true for estrogen. Women all over the world experience a lowering of estrogen at menopause, but not all women experience osteoporosis. Attributing the causes of osteoporosis to the natural decrease in estrogen at menopause is too simplistic. The fact is that Mayan Indian women, Bantu women of Africa, and Japanese women all have lower estrogen levels than women of various ethnic groups in the United States, but they all experience many fewer fractures than American women. Also, the few years before menopause are a time of very rapid bone loss for most women, yet a woman’s estrogen level at this time is generally higher than during her reproductive years.

There are ways to detect low bone density and ongoing bone loss. It is not easy, however, to predict who will actually suffer an osteoporotic fracture. Bone density tests attempt to measure bone mass in various areas of the body, and markers of bone resorption can tell if your body is likely breaking down excessive bone at any given time. These tests can detect low bone density and high bone breakdown before a fracture occurs, and thus help identify your chances of a future fracture. They cannot, however, predict who will fracture. For example, over half of all women who experience an osteoporotic fracture do not have an “osteoporotic” bone density. They have either moderately low bone density, known as osteopenia, or even normal bone density. Given this, everyone, even those with good bone density, would do well to maintain a strong bone-building program.

A more realistic picture of the causes of osteoporosis portrays a variety of bone-depleting factors, each building one upon the other. Each bone-depleting factor adds to the others until the total load is more than our bone can support, so to speak. The camel image shown here depicts many of the factors that contribute to poor health and osteoporotic fractures. The array of bone-depleting factors is more wide-ranging and more important than generally recognized.

Total load model of bone-depleting factors ©2009.

The camel drawing shows that there are many complex factors contributing to osteoporosis — and even this overloaded camel doesn’t contain all of them! It’s virtually impossible, in fact, for any single person to have all of these factors going on at once, although certainly many of them are interconnected. But it shows us that the causes of osteoporosis can vary greatly from person to person. So it makes sense that the best ways to prevent and treat osteoporosis will also vary from person to person. Every case must be carefully analyzed to develop the best individualized osteoporosis program. Yet for everyone, dietary improvements — specifically, changing to an Alkaline for Life® diet — nutrition supplement therapy, exercise, and lifestyle modifications are powerful bone-strengthening tools.

READ ARTICLE

Monday, January 16, 2012

Creatine for Mind, Body, and Longer Life

Creatine is quickly becoming one of my favorite supplements, and not just because of the way it helps me in the gym. It’s been shown that creatine can also be used as a nootropic and as a way to stave off potential neurodegeneration. Because earlier reports of damage to the kidneys and liver by creatine supplementation have now been scientifically refuted, creatine is becoming increasingly accepted as a powerful and multi-faceted daily supplement.

So what is it? Creatine is a nitrogenous organic acid that occurs naturally in vertebrates and helps supply energy to all cells in the body, primarily muscle. It has also been shown to assist in the growth of muscle fibers. Creatine achieves this by increasing the formation of adenosine triphosphate (ATP). It is an osmotically active substance, so it pulls water into muscle cells. Creatine is produced naturally in the human body from amino acids, primarily in the kidney and liver, and is transported in the blood for use by muscles.
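As background on that ATP claim, the mechanism is the standard phosphocreatine shuttle from textbook biochemistry (a summary added for context, not something the article spells out). Creatine stored in muscle as phosphocreatine re-phosphorylates ADP through the enzyme creatine kinase:

$$ \text{phosphocreatine} + \text{ADP} + \text{H}^{+} \;\rightleftharpoons\; \text{creatine} + \text{ATP} $$

Because this reaction regenerates ATP far faster than oxidative metabolism can, a larger phosphocreatine pool pays off most during short, intense efforts, which matches the training effects described below.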

Back in the early 1990s it became common for bodybuilders, wrestlers, sprinters and other athletes to take creatine as word got out that it contributed to increased muscle mass and energy. Athletes began to consume two to three times the amount that could be obtained from a high protein diet. Creatine, which is typically bought in pills or flavored powders and mixed with liquid, increases the body’s ability to produce energy rapidly. With more energy, athletes can train harder and more often, producing better results.

In fact, research shows that creatine is most effective in high-intensity training and explosive activities. This includes weight training and sports that require short bursts of effort, such as sprinting, football, and baseball. As a CrossFitter and an occasional user of creatine, I can certainly vouch for these effects. I believe that creatine is responsible for adding as much as five to twenty pounds to my lifts (depending on the kind of lift) along with an added boost of muscular endurance—two very desirable qualities for CrossFit athletes.

Recently I have switched from being an occasional user of creatine (3000 mg per day, cycling monthly) to a daily low dosage user (750 to 1500 mg per day every day) while on a high protein diet. I’ve done this for cost reasons while still hoping to take advantage of its benefits, which aren’t just limited to the physical realm.

Indeed, creatine has been shown to have a significant impact on brain health. It can boost brain performance, including positive impacts on working intelligence and memory, both of which depend on cognitive processing speed. Back in 2003, a study showed that people who took creatine for six weeks scored better on tests measuring intelligence and memory than those who did not take it. And interestingly, some of the most significant cognitive benefits are experienced by vegetarians and vegans, probably on account of protein deficiencies (which have an impact on the body’s ability to produce creatine naturally).
READ ARTICLE

What Doctors Know — and We Can Learn


Physicians are more likely to sign advance directives and avoid rescue measures at the end of their lives

Last month, an essay posted by retired physician Ken Murray called “How Doctors Die” got a huge amount of attention, some negative but mostly positive. Murray tells the story of an orthopedic surgeon who, after being diagnosed with pancreatic cancer, chose not to undergo treatment. The surgeon died some months later at home, never having set foot inside a hospital again.

Critics said that the essay was a biased opinion of how one should die, not an actual analysis of how doctors actually do die. And indeed, much of Murray’s essay was anecdotal. Murray writes that his physician friends wear medallions with DNR, or Do Not Resuscitate, orders. They instruct their colleagues to not take any heroic measures and to keep them out of the ICU at the end of life. He’s even seen a colleague with a DNR tattoo, something I’ve been threatening to get for a long time.

(MORE: The Real Issues of End-of-Life Care)

And yet, there is good evidence that physicians have thought out end-of-life issues more thoroughly than laypeople and are more likely to decline medical intervention. For example, they sign advance directives, the legal documents that indicate the kind of medical care we prefer at the end of life and where we would like to spend our last few days or weeks, far more often than the rest of us do. Less than half of severely or terminally ill patients have an advance directive in their medical records. Contrast that with a study published a few years back that found 64% of doctors surveyed had signed such documents. Those who had were nearly three and a half times more likely to refuse rescue care, like CPR, than doctors who had not signed an advance directive.

Why would doctors be so anxious to avoid the very procedures they deliver to their patients every day? For one thing, they know firsthand that these procedures are most often futile when performed on a frail, elderly, chronically ill person. Only about 8% of people who go into cardiac arrest outside of the hospital are revived by CPR. Even when your heart stops in the hospital, you have only a 19% chance of surviving. That’s a far cry from the way these procedures are portrayed on TV, where practically everybody survives having his heart shocked and undergoing CPR.

Doctors also know that undergoing heroic measures is a lousy way to die. They’ve seen what it’s like for an elderly patient to end up in the ICU, hooked up to machines, often semiparalyzed, in pain, lying on what philosopher Sidney Hook called “mattress graves” during his own terminal illness. At a recent meeting I attended, one emergency physician tearfully admitted she didn’t think she could stand to hear the sound of ribs breaking as she performed CPR on yet another elderly patient who almost certainly would not survive.

Read more: http://ideas.time.com/2012/01/16/what-doctors-know-and-we-can-learn-about-dying/

Saturday, January 7, 2012

PSA screening doesn't prevent cancer deaths: study

(Reuters Health) - Annual screening for prostate cancer doesn't cut men's chances of dying from the disease, according to the latest results of a large screening trial.

Comparing men who were screened each year with PSA (prostate-specific antigen) tests or rectal exams to men who received their usual care, researchers found that more men in the screening group had been diagnosed with prostate cancer after 13 years -- but there was no difference in how many had died from it.

The results support a previous report by the same researchers that found no difference in deaths seven to 10 years after the screening program started.

They are also consistent with recent draft guidelines from the U.S. Preventive Services Task Force recommending that average-risk men not undergo regular PSA screening, according to a researcher who worked on the study.

"Men, if they're considering screening, should be aware that there's a possibility that there's little or no benefit (and) that there certainly are harms to PSA screening," said study co-author Philip Prorok, from the National Cancer Institute in Bethesda, Maryland.

Those harms include catching and treating small cancers that never would have been detected or caused men any problems, Prorok told Reuters Health.
READ ARTICLE