Friday, June 10, 2011

NEW TREND: THE SALE OF EGGS FOR STEM CELL RESEARCH

Paying a woman for her eggs for use in stem cell research has been a bioethics taboo for years. But New York State has decided to allow it, becoming the first state where it is legal to use public money for this purpose. The decision permits payments of up to $10,000, which will probably set off a strong wave of donors and, in turn, fierce competition among the organizations involved in this kind of research, creating a market whose dimensions no one can yet gauge. For that reason, many bioethicists are concerned, believing that financial incentives could exploit women and put their health at risk.

The ethical questions surrounding egg donation for this purpose arise because the process is not risk-free. It requires a series of hormone-stimulation injections as well as an invasive procedure to retrieve the eggs, and the long-term health effects are unknown.

Defenders of the decision say that payment for similar services is nothing new: people have always been paid to take part in research that offers them no benefit and carries risks.


Personally, I do not see the difference between these donations and the egg donations currently made for fertility treatment. Infertility is a disease, so women who donate their eggs are helping other women overcome their disease, just as this kind of research aims to help find treatments for other diseases. I can, however, see a difference in the quality of the eggs used in the two processes.

For my part, I have some unanswered questions. Will this become the main source of income for a considerable number of women in the future? Which segment of women will find this offer most attractive? Will it affect women's participation in the labor market? How will it impact the labor market if demand increases? Will this become a new global market? How many eggs can a woman sell without risking her health?

Thursday, June 9, 2011

New Research Provides Breakthrough in Understanding Common Cancer


 

ScienceDaily (June 8, 2011) — Researchers from the University of Sheffield have discovered valuable insight into how people develop B-cell lymphoma, one of the most common cancers.

Lymphoma is a type of cancer that affects the blood, originating in the lymph glands. B-cells are the immune cells in the human body that are responsible for producing antibodies to fight infections and provide long-term immunity. B-cell lymphomas include both Hodgkin's lymphomas and most non-Hodgkin's lymphomas.

The team, from the University's Institute for Cancer Studies and funded by Leukaemia and Lymphoma Research, the Biotechnology and Biological Sciences Research Council (BBSRC), and Yorkshire Cancer Research, found that a mechanism different to that previously thought to be the cause of lymphoma may be responsible for the development of the disease.
Prior to this research, the main theory to explain the origins of lymphoma was the malfunction of a mechanism (somatic hypermutation) used by B-cells to modify the genes coding for antibodies. This mechanism is required to produce highly specific antibodies, but it also accidentally alters other genes, leading to lymphoma.
However, the team from the University knew that this theory accounted for mutations in only a handful of genes, so the model could explain only certain types of lymphoma.
Led by Dr Thierry Nouspikel, the researchers discovered another mechanism, which potentially affects many more genes and can account for a wider palette of lymphomas. The research found that B-cells actually do not repair the bulk of their DNA and only take care of the few genes they are using. When the B-cells are inert in the blood flow, this is not a problem. However, when they receive a stimulation (e.g. an infection) they start to proliferate and then produce antibodies.
To proliferate they must replicate their DNA, and replication of damaged DNA results in the introduction of mutations, the accumulation of which can lead to lymphoma. Dr Nouspikel's team have designed a novel method to specifically detect such mutations, and have proved that they do occur in genes that have been implicated in lymphoma.
The researchers demonstrated that B-cells are deficient in one of the main DNA repair pathways, known as Nucleotide Excision Repair. This pathway repairs a lot of different DNA lesions, including UV-induced damage and chemical adducts (e.g. from air pollution and cigarette smoke). Their model therefore explains why strong UV exposure (e.g. unprotected sun bathing) is the number one environmental risk factor for lymphoma and also supports the evidence that exposure to air pollution and smoking are also risk factors.
Dr Nouspikel said: "Lymphoma is one of the ten most frequent cancers in adults in the UK, and the third among children. If we want to come up with efficient strategies for prevention and therapy, it is crucial to understand what causes it. The novel mechanism we have discovered potentially accounts for the development of many different types of lymphoma. It may also explain why strong exposure to sunlight is the main environmental risk factor for this cancer."
The research is due for publication in Blood on 9 June 2011 and will also be published in Cell Cycle on 15 July 2011.

Wednesday, June 8, 2011

The College Education Crisis


In most industries, competition drives quality and lowers prices.  Unfortunately, higher education suffers from a serious lack of genuine economic competition.  In fact, private colleges are essentially a government-subsidized oligopoly resembling electric utilities, while public universities are a lot like any other government bureaucracy.
To support this conclusion, consider that over the past two decades tuition has risen 3.5 times the average cost of living and that the quantity of skilled college graduates falls short of the demand in all too many fields.  So, at a time of rapidly rising productivity, why is the effective productivity of colleges and universities actually declining?
The reasons for rising costs include the need to invest in modernizing facilities, providing enhanced security, and providing specialized equipment for technical training.  However, the biggest single factor behind tuition inflation has been a lack of legitimate market competition.  And the largest barrier to competition is the Federal Stafford Loan program, launched in 1992.1  Unlike most other forms of financial aid, these government-backed student loans have no restrictions on parental income.  To colleges, this was simply a license to print money; and this new source of billions in additional aid dollars was an open invitation to increase tuition.
Promoted as a low-cost program, Stafford Loans led to massive borrowing.  Colleges got lots of additional revenue and students were left with the debt.  The government had worked a “sleight of hand” in the education realm, shifting money from the future incomes of graduates to boost the incomes of college faculty and administrators today.  Consequently, many students found themselves in financial bondage and society was little, if any, better off.
So, what did these universities spend this windfall on?  For the most part, administrators invested in what they perceived to be “quality enhancements.”  Unfortunately, most of them seem to have a serious misunderstanding of what constitutes the school’s product.  Universities see themselves as the product, judging their worth by the size and amenities of their campuses, the caliber of their faculties, and the range of services they offer.  However, while these things may attract students and alumni donations, the real product is the student.  The real measure of a school’s quality is the quality of the graduates the school produces.
That’s the rub.  For all the additional revenue colleges now generate, the outcomes in terms of student capabilities are not all that great.  In fact, a recent study based on a large sample of students attending 29 different American four-year colleges reported that an astounding 36 percent of students showed no gains in learning in their four years of college.
Another disturbing statistic is that currently, 40 percent of those who begin a four-year program do not graduate.2
Obviously, the fault does not lie entirely with the colleges.  The quality of incoming students has also been slipping; that’s an indictment of the escalating sums we’ve spent on secondary schools.  According to U.S. government statistics reported in The Economist,3 high school seniors’ proficiency scores for science, math, reading, and writing all dropped from 1992 to 2005.  Furthermore, just one-third of American students graduated from high school with the ability to read proficiently, and just one-fourth could write a basic paragraph.  Sadly, only about 15 percent of high school graduates possess the skills necessary to do well in college.  Despite increased spending over the past 20 years, this number has not changed much.
This problem is only expected to intensify because, as we reported this month in our analysis of the trend, The Challenge of “Media-Addicted” Consumers, Employees, and Citizens, young people are becoming too busy with digital media — including cell phones, text messages, the Internet, e-mail, and video game systems — to read old-fashioned printed books or newspapers, even on their iPads.
Ironically, the generation that is most comfortable with digital technology, which gives them unprecedented access to all of the world’s knowledge, actually knows less than the previous generations that lacked this advantage.  Instead of using the Internet to improve their understanding of the world, young people are using it to stay in touch with their friends.
Reacting to this slide in college-readiness, most universities have had to “dumb down” their curricula in order to maintain their levels of enrollment.  As a result, many students easily “sail through” to a degree taking the easiest classes available.  That is driving yet another alarming statistic:  The average full-time U.S. college student now spends just 3.3 hours per weekday on educational activities.  That is surpassed by 3.6 hours of leisure and sports.4
Not surprisingly, college students typically aren’t producing term papers filled with logic, rhetorical flourishes, or even proper grammar.  Instead, they’re communicating with a minimum of effort, a minimum of words, and a minimum of impact.
So, in effect, it’s a double whammy:  College is costing more, while students are getting less.  As a result, they are graduating in debt, and they are unprepared to get a job to earn the income they need to pay off their debt.
To address the issue of rising tuition, students are finding ways to cut costs.5 For example, 30 percent are saving money by taking on a larger course load in order to graduate sooner.  Another smart approach is taken by the 43 percent who have chosen to live at home and commute to campus.  This is certainly a great way to limit expenses, but it also limits one’s choice of schools.
Unfortunately, these measures aren’t real solutions to the growing crisis; they are merely ways to cope with the problem.
A major step in reversing the escalation of tuition would be to get the government out of the game.  If every student paid his or her own tuition, market forces would quickly compel colleges to optimize their costs.  Under the current system, university administrators know the government will continue to provide more funds through student loans, which eliminates their motivation for controlling costs.  Under these new rules, however, some colleges would close, some would merge, and most would streamline their operations, while seeking new business models.  The result would be college educations redesigned to fit economic realities.
Yet, just bringing costs under control won’t solve the bigger problem of the graduates who are not well-educated.  For that, there needs to be a major shift to a solution whose time has come.  Fortunately, it’s a solution that will go a long way toward solving the cost problem as well.
What is that solution?  Replace today’s 19th century system with a 21st century system.  Despite protests to the contrary, the traditional college model that requires a campus with classrooms, teachers, dorms, and libraries is obsolete.  In fact, it is unsustainable and must be replaced with a low-cost alternative.
The alternative is “distance education,” also known as “online learning.”  As Harvard professor Clayton Christensen reminds us, such disruptive solutions almost always enter at the bottom of the market and steadily get better until they own all but a tiny segment at the top of the market.6
According to a study by Shelia Tucker of East Carolina University, “Distance education reaches a broader student audience, better addresses student needs, saves money, and, more importantly, uses the principles of modern learning pedagogy.”7 This type of learning is already becoming widespread with sites like eduFire.com, which provides a marketplace for both teachers and students to find one another and connect with the most effective learning tools — all of it online, so geography doesn’t matter.  Moreover, the educational experience can be infinitely customized, which completely upsets the traditional model that offers a one-size-fits-all curriculum.
The old stigma of online learning is quickly fading, especially among a younger generation that is fully accustomed to receiving information electronically.  Studies reveal that online learning is not only as good as traditional settings; by some measures, it's even better.  In Tucker's study, for example, students who learned online scored higher on post-tests and final exams.
Also, a 12-year study by the U.S. Department of Education found that people learn better online than in a traditional classroom.8 In part, this is because students learning online are not faced with an authority figure but with their own peers in a collaborative setting.  They’re already used to that environment from social networking sites.
From both an economic and educational point of view, it would be much smarter to pay the world’s greatest expert $1 million per year to create and update the world’s greatest course on a topic that tens of thousands of students could access electronically, versus paying 50 mediocre professors $100,000 each per year to teach 100 students that same course in a traditional setting.
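The arithmetic behind that comparison can be made explicit. The sketch below is illustrative only: the article gives the faculty costs and class size, while the figure of 50,000 online students is an assumption standing in for "tens of thousands."

```python
# Hypothetical cost comparison for the scenario described above.
# Faculty costs and class sizes follow the article; the online
# enrollment figure (50,000) is an illustrative assumption.

def cost_per_student(total_cost, num_students):
    """Annual instruction cost divided across enrolled students."""
    return total_cost / num_students

# Traditional model: 50 professors at $100,000 each, 100 students apiece.
traditional = cost_per_student(50 * 100_000, 50 * 100)

# Online model: one expert paid $1,000,000, reaching 50,000 students.
online = cost_per_student(1_000_000, 50_000)

print(f"Traditional: ${traditional:,.0f} per student per course")  # $1,000
print(f"Online:      ${online:,.0f} per student per course")       # $20
```

Under these assumptions the online model costs roughly one-fiftieth as much per student for the same course, which is the economic force the paragraph above describes.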
Research universities obviously need top Ph.D. researchers and graduate students, but those people are largely funded by research grants from the government and corporations.  Rank-and-file people who write low-impact research papers and teach lame classes need to go away, along with their overhead; and that’s exactly what will happen when the distance learning model becomes a reality.
Where traditional institutions fail to fill the void, entrepreneurs will enter.  According to an article in VentureBeat,9 the high cost of a college education at traditional universities is already driving opportunities for online learning endeavors that promise to revolutionize the way we learn.
The technology is available, the need is great, and the time is now.  A few elite colleges can certainly afford to keep the old model, as long as rich benefactors pick up the tab, but the rest should be forced to transition into the 21st century.
Given this trend, we offer the following three forecasts:
First, a major transition to online learning will occur over the next two decades. It will occur naturally, as both supply and demand grow together.  Students who are simply priced out of a traditional college education will increasingly flock to online offerings.  However, as the quality continues to improve, we foresee an interesting irony.  Many of those who will be first to choose online learning will do so because their high school grades and ACT/SAT scores were too low to receive merit-based college scholarships.  But, many of them will get a better education and be more prepared for the workforce than the “brighter” students who took frivolous classes such as “The Impact of the Beatles” and spent three hours a day playing video games in their dorms.  In time, as the success and cost savings of online learning become more widely known, it will become the preferred choice for students across all academic levels.
Second, in the short term, a “hybrid solution” will provide the winning value proposition for traditional colleges. Some colleges already offer a combination of online and live programs.  Those schools will find it easy to trim faculty and facilities without endangering their unique character.  By focusing their remaining faculty on top-notch online and live courses, they will be able to produce better-prepared graduates at a lower cost.
Third, while college faculty members and administrators will ally themselves with bureaucrats and politicians to fight this wave of change, they will ultimately fail. As with every technology-driven market revolution, those with the biggest investment in the status quo will fight change.  However, the superior business model will triumph in the end.  America’s great research universities will continue to attract the best and brightest faculty and students from around the world.  Since their work is supported largely by public and private research grants and contracts, that will continue unabated.  A certain set of courses will continue to use the traditional model, but for the most part, faculty members will embed their unique insights in courses accessible to thousands of students in thousands of locations, rather than dozens of students in a single classroom.

Tuesday, June 7, 2011

Predicting Our Own Happiness


Why we’re usually wrong about how we’ll feel in the future.

Will acing an exam truly make you happy? Will the snub of a cute co-worker send you into throes of despair? Maybe not. New research shows that people routinely discount their own personality biases when they envision how happy or sad they will be as a result of changing external circumstances.
Individuals who are naturally pessimistic imagine that they will be far more euphoric as a result of big life events than usually turns out to be the case. Folks who are usually in a great mood underestimate how much happier particular events will make them (which must make for a pleasant surprise later on).
The new study comes from psychological researchers Jordi Quoidbach of the University of Liege, Belgium, and Elizabeth Dunn of the University of British Columbia. To test their hypothesis that both pessimists and optimists tend to incorrectly predict their future happiness, they surveyed a group of college students to determine their base-level personality (from “optimistic” to “neurotic”). The subjects were then asked to imagine how they would feel, on a scale from one to five, if they received a certain grade in a class.
Six weeks later, when grades actually came out, the researchers surveyed the subjects again. They found a wide gap between how the students expected to feel and how they actually felt. But Quoidbach and Dunn did find a close correlation between how the subjects felt earlier and how they felt when they received their grades.
“Results supported our hypothesis that dispositions would shape participants’ actual feelings but would be largely neglected when people made affective forecasts,” they write.
In a second test, participants (Belgian adults) were asked to describe how happy they would be in the event that Barack Obama won the 2008 U.S. presidential election. After the election was called, the researchers again found that the participants’ actual level of happiness reflected how happy they were when they were asked the question, not how happy they expected to be later.
Why are people so bad at predicting their future happiness levels? The problem may be in the brain. Previous studies have shown that the part of the brain responsible for envisioning future states is the same part tasked with remembering situations we’ve already experienced, the episodic memory center. Neurologically, the act of imagining a scenario is a lot like the act of remembering. But we process thoughts and ideas about our own personalities in a different part of the brain, the semantic memory center, which is tasked with learning and analyzing abstract concepts but not remembering specific events.
“For example, an amnesic patient was able to rate his personality in a highly reliable and consistent manner even though he was unable to recollect a single thing he had ever done,” write the researchers. When we envision the future, we use the part of the brain we use to remember the past, not the part that knows our personality the best. This is why our personal-happiness forecasts are so often off the mark.
Quoidbach and Dunn’s research provides further support for Hedonic Adaptation, a 40-year-old theory that says that most people have a baseline level of happiness, whether or not they’re aware of it. So while we may experience blips of joy when we rush out to make a big consumer purchase, or bouts of melancholy when we suffer a setback, eventually we return to a default emotional setting.
Quoidbach and Dunn hope their research will help people take their personality into account when making big decisions or forming expectations. “For example, individuals high in dispositional happiness who are planning their next vacation might not need to waste money and effort finding the perfect location (because they will be happy in the end anyway). By contrast, people with less happy dispositions might be more prone to regret the slightest annoyance, so carefully planning every detail of the trip might be the best strategy for their future well-being,” they write.
In other words, if you want to know how a big event will make you feel in the future, consider how you feel right now and you’ll have your answer.—Patrick Tucker
Source: “Personality Neglect: The Unforeseen Impact of Personal Dispositions on Emotional Life” by Jordi Quoidbach and Elizabeth W. Dunn, Psychological Science (December 2010), www.psychologicalscience.org.

Discovering the Future


By Paul Crabtree
THE FUTURIST May-June 2008
For good reason, H.G. Wells is often considered to be the “father” of futurism. In the September-October 2007 edition of THE FUTURIST, I discussed some of the amazingly accurate predictions of Wells in his nonfiction book Anticipations of the Reactions of Mechanical and Scientific Progress Upon Human Life and Thought, published in 1901. In that seminal volume, Wells attempted to analyze and describe the probable sequence of developments over the course of the twentieth century in a number of pivotal areas, such as transportation, cities, societal relations, government, education, and warfare. He achieved an overall predictive success rate of 60%–80%. Many of these predictions were specific and detailed enough to preclude guesswork and luck as explanations for his success. Though he had a few misses, mostly in terms of predicting social and demographic changes, his accomplishment must nonetheless be judged as an amazing achievement and one that begs for further investigation into how he managed it. His 1902 address to the Royal Society of England provides some startling clues. Among the tools in the Victorian futurist’s arsenal:
• Clockwork universe assumption.
One pillar of Wells’s argument in his speech to the Royal Society, “The Discovery of the Future,” is the nineteenth-century concept that all future events are predetermined by past events. If we knew all that happened in the past, strict cause and effect principles would allow us to predict the future, like a fall of dominoes.
Quantum physics and chaos theory would later invalidate this theory of an absolutely knowable correspondence between past and future events, but in 1901, Wells’s postulate that the past and the future were determined was an orthodox scientific view.
• Inductive thinking.
Wells argues that inductive thinking allows one to build up an understanding of the broad outlines of future history in the same way that archaeologists slowly build up an understanding of the history of previously unknown societies of the past—i.e., by “the comparison and criticism of suggestive facts.” Instead of looking at an array of archaeological facts and relationships and inferring what the past must have been like, Wells suggests using existing or researched information to infer a future state of affairs. Aside from really unknowable large-scale events—an asteroid impact being one of his examples—Wells proposes that such inferences can be reasonably accurate.
• Law of large numbers.
Forecasting the future can make use of statistical probability. While discrete human actions and very detailed events may not be individually predictable because we do not know all about the present or past, on a large scale involving many people and events, a broad trend becomes more apparent and historical aberrations tend to even out. As Wells scholar Patrick Parrinder says of Wells’s use of this idea, “We are concerned with something like the ‘actuarial principle’ used by insurance companies in determining their premiums. Though individual outcomes are wholly unpredictable, certain sorts of average outcomes in human affairs can be predicted with fair accuracy.”
In arguing for the law of large numbers and broad historical forces, Wells is careful to add that he doesn’t believe in the “Great Man” theory of history. He believes that even individuals in authority react to events more than drive them. Humanity, Wells believes, can influence the details of history but rarely if ever alter major historical trends.
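The actuarial principle Parrinder describes can be sketched numerically. The simulation below is a minimal illustration, not anything from Wells or Parrinder: the coin-flip model, the 0.3 event rate, and the sample sizes are all assumptions chosen to show that single outcomes are unpredictable while large-sample averages are stable.

```python
# A minimal sketch of the "actuarial principle": individual outcomes
# are unpredictable, but averages over many cases are stable.
# The event model (probability 0.3) and sample sizes are illustrative.
import random

random.seed(42)  # fixed seed for reproducibility

def average_outcome(n):
    """Mean of n random 0/1 outcomes (e.g., whether a claim is filed)."""
    return sum(random.random() < 0.3 for _ in range(n)) / n

# A single case tells us almost nothing: the result is 0.0 or 1.0.
print(average_outcome(1))
# But a large sample clusters tightly around the true rate of 0.3,
# which is exactly what lets an insurer set a premium.
print(average_outcome(100_000))
```

This is the statistical footing under Wells’s claim: the forecaster, like the insurer, predicts aggregates, not individuals.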
• Science as a predictive discipline.
Scientific procedures, principles, and results provide a basis for prediction, says Wells, pointing out that scientific knowledge is inherently predictive. He argues that science is not science unless it allows one to successfully predict phenomena—the course and timing of planetary movements, the diagnostic course of disease, the result of chemical combinations, etc. In “Discovery of the Future,” he advocates a general expansion, codification, and joining together of predictions from the various scientific disciplines.
• Future-oriented mind-set.
Wells rejects a logical divide between the past, present, and future as a mistaken product of our personal experience. For Wells, a futurist mind-set means having a mind that “thinks constantly and by preference of things to come, and of present things mainly in relation to the results which must arise from them.” The opposite way of thinking, says Wells, uses the past (rather than calculated future results) as a guide to future action. This mind-set tends to assume that conditions in the past will apply to the future rather than anticipating changes. Change cannot be ignored, cautions Wells, citing both the grand timescale of evolution and the pace of human change in his own time.
• Change drivers.
Not mentioned in his “Discovery of the Future” presentation, but arguably a key assumption in addition to the predictive methodological components enumerated above, is the proposition that scientific and technical progress is both inexorable and a principal driver of changes in the human condition. This, of course, is the central theme of Wells’s Anticipations. Curiously enough, the driving role of science and technology in human affairs is not covered in the “Discovery” talk. Instead, Wells invokes a more encompassing agent of change in the form of a universal, almost teleological need to change and evolve:
We look back through countless millions of years and see the will to live struggling out of the intertidal slime, struggling from shape to shape.… We watch it draw nearer and more akin to us ... its being beats through our brains and arteries ... thunders in our battleships, roars through our cities....
• Disciplined web of implications.
Later in life, Wells tended to be less optimistic than he was in 1902 about the possibility of making successful predictions. Evidently Wells’s work on the screenplay and 1936 film Things to Come had humbled him somewhat. In a subsequent radio broadcast entitled “Fiction about the Future” he details the difficulty he had imparting a realistic feel to the scenes showing the distant future on the motion picture screen. He says the difficulties of depicting the small details of everyday life—hairstyles, clothing, and furniture—defeated the best efforts of his research and imagination. Despite his belief that the film had not been convincing enough, the successful predictions about the future included in Things to Come as well as the book it was based on represent a tour-de-force somewhat comparable to the predictions made in Anticipations.
An approach that Wells used in writing successful future-oriented fiction, which he discussed in the “Fiction about the Future” broadcast, no doubt applies to his predictions in general. This is to create a web of detailed, plausible implications “by rigorous adherence to the hypothesis” and by excluding “extra fantasy outside the cardinal assumption.”
Wells’s Predictive Building Blocks As a System
At first glance, the elements outlined above look more like discrete considerations than parts of an ordered whole:
• Clockwork universe assumption.
• Inductive thinking.
• Law of large numbers.
• Science as a predictive discipline.
• Future-oriented mind-set.
• Change drivers.
• Disciplined web of implications.
When these principles are expressed in an active voice, relationships can be seen among them. Assume prediction is possible (clockwork universe); gather data and relationships and see what you learn (inductive thinking); identify central tendencies (law of large numbers); rely on logic, math, and science (science as a predictive discipline); identify areas to be evaluated for change impacts (future-oriented mind-set); identify key trends and forces (change drivers); and pursue central tendency causal impacts as far as possible while assuming other things unchanged (disciplined web of implications).
These actions can be reordered, and H.G. Wells’s fiction-writing expertise no doubt provided him with an ability to create scenarios. Together with the forecasting steps outlined above, Wells’s predictive process can be seen to be highly systematic, not merely inspired guesswork. It has, in fact, similarities with the development and use of a modern iterative computer-based forecasting model. But it isn’t mechanical. Rather, in its inventiveness and its reasonability, it speaks to the very best of humanity.
About the Author
Paul Crabtree retired from the U.S. federal government after serving in a number of analytical and managerial positions. He now devotes much of his time to research and writing on technological innovation, forecasting, and related issues.

Monday, June 6, 2011

Stem Cell Treatment May Offer Options for Broken Bones That Fail to Heal



(Embargoed) Chapel Hill, NC - Researchers at the University of North Carolina at Chapel Hill School of Medicine have shown in an animal study that transplanting adult stem cells enriched with a bone hormone can help regenerate bone and repair fractures that do not heal properly.

The UNC study team, led by Anna Spagnoli, MD, associate professor of pediatrics and of biomedical engineering, showed that stem cells engineered with the regenerative growth hormone factor insulin-like growth factor I (IGF-I) become bone cells and also help the cells at the fracture site repair the break, speeding healing. The new findings were presented Sunday, June 5, 2011, at The Endocrine Society's 93rd Annual Meeting in Boston, Massachusetts.

Una deficiencia de curación de la fractura es un problema común que afecta a unas 600.000 personas al año en América del Norte. "Este problema es aún más grave en niños con osteogénesis imperfecta, o enfermedad de los huesos quebradizos, y los adultos mayores con osteoporosis, ya que sus huesos frágiles pueden fácil y repetidamente descanso, y el tratamiento quirúrgico de injerto de hueso a menudo no es satisfactoria o viable", dijo Spagnoli .

Aproximadamente 7,9 millones de fracturas de hueso se producen cada año en los Estados Unidos, con un costo estimado de $ 70 mil millones. De estos, 10 a 20 por ciento no logran curarse.

Las fracturas que no reparar en el plazo normal se llaman fracturas sin unión. Usando un modelo animal de una fractura no sindicalizados, un "golpe de gracia" del ratón que carece de la capacidad de curar los huesos rotos, Spagnoli y sus colegas estudiaron los efectos de trasplante de células madre adultas enriquecido con IGF-I. Ellos tomaron células madre mesenquimales (células madre adultas de la médula ósea) de los ratones y las células de ingeniería para expresar el IGF-I. Luego se trasplantaron las células tratadas en ratones knock-out con una fractura de la tibia, el hueso largo de la pierna.

Usando la tomografía computarizada (TC), los investigadores demostraron que los ratones tratados mejor que la curación de fracturas que los ratones o se deja sin tratar o tratados sólo con células madre. En comparación con los controles de la izquierda para sanar por sí solos o en los receptores de las células madre solamente, los ratones tratados con más hueso cerrar la brecha de fractura, el hueso y la nueva era de tres a cuatro veces más fuerte, de acuerdo con Spagnoli.

"Más emocionante, hemos encontrado que las células madre facultados con IGF-I restauró la formación de hueso nuevo en un ratón que carecen de la capacidad de reparar huesos rotos. Esta es la primera evidencia de que terapia con células madre puede tratar una deficiencia de la reparación de la fractura", que , dijo.

Este éxito en un modelo animal de la fractura no sindicalizados, Spagnoli, dijo, "es un paso crucial hacia el desarrollo de un tratamiento con células madre basado en los pacientes con fracturas no los sindicatos."

"Tenemos la visión de un uso clínico de la combinación de células madre mesenquimales e IGF-1 similar a la orientación establecida en el trasplante de médula ósea, en los que la terapia con células madre se combina con factores de crecimiento para restaurar las células de la sangre", dijo. "Creo que este tratamiento será posible comenzar las pruebas en pacientes en unos pocos años." IGF-I está actualmente aprobado para el tratamiento de niños con una deficiencia de esta hormona, que causa retraso en el crecimiento.

Otros que contribuyeron a la investigación son: Froilán Granero-Moltó, Myers Timoteo, Weis Jared, Longobardi Lara, Li Tieshi, Yan Yun, la sentencia de Natasha, y Rubin Janet.

El apoyo a esta investigación provino del Instituto Nacional de Diabetes y Enfermedades Digestivas y Renales, un componente de los Institutos Nacionales de Salud.

domingo, 5 de junio de 2011

El Crecimiento demográfico desmesurado creara ciudades superpobladas inhabitables en 20 años.




De aquí al 2030, un 60% de la población mundial vivirá en zonas urbanas, el doble de gente que en 1950, y un 22% más que en 2003, señala el Programa de la ONU.  Para entonces, y si no se toman las medidas necesarias, el aire de estos lugares será irrespirable.

Se estima que cada año en el mundo, 60 millones de personas se mudan a las ciudades y áreas urbanas, más de un millón de personas cada semana. En la actualidad, el 70% de los habitantes de los países en vía de desarrollo viven en el campo, mientras que sólo un 30% vive en ciudades. En el 2030, esta proporción se invertirá, con la consecuente superpoblación en las zonas urbanas.

De hecho, en el año 2005, se alcanzó un hito en la historia de la humanidad: por vez primera la mayoría de la población de nuestro planeta residía en las ciudades. El planeta contará ya para 2015 con 23 ciudades con una aglomeración de más de diez millones de habitantes, contra las 19 que teníamos en 2000. Un 80% de estas ciudades gigantes se encontrarán en los países en vías de desarrollo, tal y como ocurre actualmente

El desafío radica por tanto en construir urbes que puedan hacer frente a su superpoblación, Ciudades más saludables, que ahorren energía y agua y que puedan responder más rápidamente a situaciones de emergencia. Supone un nuevo comienzo del desarrollo humano, uno más sostenible y respetuoso con el medioambiente.


Para tener éxito, será necesario construir:

-        1- Ciudades con sistemas inmunológicos más saludables. Para disminuir el peligro de expansión de las enfermedades, por el incremento en la densidad de población. Para evitar las pandemias, los hospitales, escuelas, ayuntamientos y lugares de trabajo deberán detectar mejor, rastrear y prevenir infecciones, como la de la gripe A. Internet emergerá como una herramienta de información médica anónima que cada vez contendrá más datos que permitirán a la gente conocer y evitar ciertas enfermedades. 


 2-  Edificios con diseños que imitarán a los organismos vivos.
Hasta ahora, los edificios han sido construidos con sistemas de calor, agua, alcantarillado y electricidad independientes entre sí. La tecnología de las nuevas construcciones hará que éstas funcionen como organismos vivos capaces de detectar y responder rápidamente para proteger a sus habitantes, ahorrar en recursos, y reducir las emisiones de carbon.


3-    Coches y vehículos que  no funcionen con combustibles fósiles.

4-  Sistemas más inteligentes de gestión del agua y de la energía. Las ciudades habrán de afrontar el incremento exponencial de la demanda de agua (que se espera se multiplique por seis en los próximos 50 años).