Sunday, November 30, 2008

Lie Detectors: How Truthful Are These Devices?

The modern lie detector / polygraph test is said to have been used to prove guilt or innocence in matters as varied as petty theft and "alleged" alien abductions. But are these machines truthful?


By: Vanessa Uy


High-profile uses of lie detectors / polygraph machines in the US justice system range from probing the guilt or innocence of rogue CIA agents to testing the credibility of alien abduction witnesses and victims. Even though the majority of us know that these devices are used to determine whether the subject being tested is telling the truth, can the machine irrefutably determine the guilt or innocence of the "test subject"?

In reality, polygraph devices (more famously known colloquially as lie detector machines) measure how the subject reacts physiologically to the set of questions being asked. Whether the subject is lying is usually determined by the person supervising the test, based on the resulting measurements. One of the few manufacturers of purpose-built polygraph devices is the Lafayette Instrument Company in Lafayette, Indiana. The firm makes polygraph devices that cost around 12,000 US dollars each. A typical polygraph, usually classified as a 4-pin device, has several modules that measure galvanic skin response (GSR), the breathing rate via a pneumosensor, and the heart rate and blood pressure.
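To make the instrumentation side of this concrete, here is a minimal Python sketch of what one time-stamped reading from the four classic channels might look like to the software on a PC-based unit. It is purely my own illustration; the field names, units, and thresholds are assumptions rather than anything from Lafayette Instrument or any real examiner's method:

```python
from dataclasses import dataclass

@dataclass
class PolygraphSample:
    """One time-stamped reading from the four classic polygraph channels.
    Units and field names are illustrative assumptions, not a vendor format."""
    time_s: float               # seconds since the test started
    gsr_microsiemens: float     # galvanic skin response (skin conductance)
    breathing_rate_bpm: float   # breaths per minute from the pneumosensor
    heart_rate_bpm: float       # beats per minute
    blood_pressure_mmhg: float  # relative cuff pressure reading

def flag_reaction(baseline: PolygraphSample, current: PolygraphSample,
                  gsr_jump: float = 2.0, hr_jump: float = 10.0) -> bool:
    """Crude example rule: flag a question if skin conductance and heart rate
    both jump well above the subject's baseline. Real examiners interpret the
    full charts; the thresholds here are arbitrary placeholders."""
    return (current.gsr_microsiemens - baseline.gsr_microsiemens > gsr_jump and
            current.heart_rate_bpm - baseline.heart_rate_bpm > hr_jump)
```

The point of the sketch is exactly the article's point: the machine only records physiology, and any "lie / truth" call comes from a human (or, here, an arbitrary rule) interpreting those numbers.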

Newer digital / PC-based polygraph devices now exist (and are far cheaper), though they are said to be not as accurate as a purpose-built polygraph device. PC-based 4-pin polygraph devices do have the advantage of storing and saving data digitally. But even though these types of polygraph do very well at their intended roles of measuring GSR, heart rate, blood pressure, breathing rate, and so on, they cannot yet irrefutably determine the guilt or innocence of the person under test. That is why lie detector / polygraph test data is usually inadmissible in criminal court proceedings where it would be used to determine the guilt or innocence of the accused.

The latest form of these "lie detector" devices is called Brain Fingerprinting, which is touted to be more accurate than the polygraph devices currently in use. Developed by Dr. Lawrence Farwell, a Seattle-based neuroscientist, Brain Fingerprinting is a radically new type of "lie detector" that has been reported to have a more than 90 percent certainty rate in determining whether the subject is telling the truth. The system locks on to the brain's P300 (MERMER) response when the test subject is asked a well-selected roster of pertinent questions about the crime. The test subject's brain-wave patterns are measured via a sensor cap. At present, brain fingerprinting test results are not yet admissible as evidence in the majority of US courts.
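For readers curious what "locking on to the P300" might look like in practice, here is a toy Python sketch of the standard EEG trick of averaging many stimulus-locked epochs and checking for a positive bump roughly 300 to 600 milliseconds after the stimulus. The window and amplitude threshold are my own illustrative assumptions, not Dr. Farwell's actual algorithm:

```python
import numpy as np

def p300_present(epochs: np.ndarray, sample_rate_hz: float,
                 window_s=(0.3, 0.6), threshold_uv: float = 5.0) -> bool:
    """epochs: array of shape (n_trials, n_samples), each row one EEG epoch
    time-locked to the stimulus, in microvolts. Averaging across trials
    suppresses background EEG; a sustained positive deflection in the
    300-600 ms window is the classic P300 signature. Threshold is arbitrary."""
    erp = epochs.mean(axis=0)                       # event-related potential
    start = int(window_s[0] * sample_rate_hz)
    stop = int(window_s[1] * sample_rate_hz)
    return erp[start:stop].mean() > threshold_uv

# Example with synthetic data: 40 trials, 1 second at 250 Hz
rng = np.random.default_rng(0)
trials = rng.normal(0.0, 10.0, size=(40, 250))
trials[:, 75:150] += 8.0                            # fake P300 bump at 300-600 ms
print(p300_present(trials, sample_rate_hz=250))     # True
```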

Despite the advances in lie detection technology over the years, the US justice system is still wary of accepting polygraph / lie detection data as evidence because the test results are open to interpretation. Lie detection devices also arguably violate the defendant's constitutional right against self-incrimination when used in criminal trial proceedings. Plus, it has been shown that polygraph devices / lie detectors are not infallible. Former CIA double agents / rogue agents Aldrich Ames and Howard Woodward "aced" their polygraph tests during the 1980s even though the other evidence against them proved their guilt. Howard Woodward even managed to escape behind the Iron Curtain more than 20 years ago, and his whereabouts remain unknown today despite the Cold War having ended almost two decades ago.

Thursday, October 16, 2008

Blue LED Water Purification

They consume very little power for the amount of light that they give off, but do blue light-emitting diodes, or LEDs, produce enough ultraviolet (UV) radiation to kill water-borne bacteria and make water safe to drink?


By: Ringo Bones


I was skeptical at first, given my first-hand experience and working knowledge of light-emitting diodes. But a research scientist at the Berlin Institute of Technology recently claimed to have developed a set-up to purify water, that is, to kill water-borne bacteria via ultraviolet radiation, using just an array of blue light-emitting diodes. If this works, it would start a revolution in how we obtain safe drinking water, given that blue LEDs are claimed to be hundreds of times more efficient than the mercury-vapor lamps currently used to produce the ultraviolet rays that kill water-borne bacteria and other pathogens.

The blue LED water purification concept was aired on October 6, 2008 on a DW-TV science program titled Tomorrow Today. Michael Kneissl of the Berlin Institute of Technology demonstrated his prototype set-up, claiming that the light-emitting diode array produces enough UV radiation to "zap" harmful water-borne bacteria. If this is true, then Michael Kneissl has probably come up with a Nobel Prize worthy concept, given that the mercury-vapor UV lamps currently used in this type of water purification are very power hungry compared to the (claimed off-the-shelf) light-emitting diodes that he used.
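Whether an LED array "produces enough UV" boils down to a simple dose calculation: germicidal effect is usually reckoned as irradiance multiplied by exposure time. Here is a rough Python sketch of that arithmetic; the 40 mJ/cm² disinfection target and the 0.5 mW/cm² LED irradiance are my own placeholder figures, not numbers from Kneissl's set-up:

```python
def required_exposure_seconds(target_dose_mj_per_cm2: float,
                              irradiance_mw_per_cm2: float) -> float:
    """UV disinfection dose = irradiance x time.
    1 mJ/cm^2 = 1 mW/cm^2 sustained for 1 second."""
    return target_dose_mj_per_cm2 / irradiance_mw_per_cm2

# Assumed figures for illustration only:
target_dose = 40.0     # mJ/cm^2, a commonly cited drinking-water UV dose
led_irradiance = 0.5   # mW/cm^2 at the water surface from the LED array
print(f"{required_exposure_seconds(target_dose, led_irradiance):.0f} s of exposure")
# -> 80 s under these assumptions
```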

Theoretically, light-emitting diodes can last thousands of years, up to 150,000 years, if used well below their rated current limits. Even if the Berlin Institute of Technology's LED-based water purification system ran the diodes at current levels very near their ratings, they would still last years compared to UV-generating mercury-vapor lamps. If the concept goes on line, it will probably be the water purification method with the lowest carbon footprint, given the energy efficiency of light-emitting diodes.

Monday, September 22, 2008

Fact Free Science: A Threat to Western Civilization?

Faced with ever diminishing educational budgets and the rise of extremist religious orthodoxy, is this cherished body of knowledge we call science in danger of dying off in our ever-complacent Western Civilization?


By: Vanessa Uy


Maybe it was one of James P. Hogan’s musings about “science” being too often referred to as this body of information that “everybody knows” because they heard it somewhere that got me thinking about why I find mainstream educators’ methods of teaching science wanting. I mean how often does anyone check out the original sources, or ask whether there might be “other” original sources reporting different results but getting less publicity?

Given the runaway success of the Discovery Channel's Mythbusters, it seems that I'm hardly alone in questioning the "establishment's" stance on what it defines as science in the first place. Or could this be the reason the Mythbusters even attempt to evaluate, from time to time, "alternative technologies" like alleged perpetual motion machines, zero-point energy generators, and anti-gravity devices, among other things? Sometimes I wonder if the Mythbusters, Adam and Jamie, are just doing this as a form of public service every time they try to debunk or redeem these "alternative technologies" on basic cable.

Recently, some scientists with more accredited accolades than the number of vinyl LPs I own expressed dire warnings with regard to the September 10, 2008 experiments at CERN, about how the Large Hadron Collider could inadvertently create a black hole of sufficient strength to suck our entire planet into oblivion. Yet experiments like these had been performed even back when there was a "Manhattan Project" to examine atomic structure. Every scientist, even those who don't belong to the lofty domain of theoretical physics, should at least have a basic inkling of whether a mere "atom smasher" could destroy the Earth. Our Penning trap technology is so primitive that we can, at present, store only a few atoms of anti-matter at a time for particle accelerator use. The resulting total energy output is only a bit greater than that produced by striking a match alight, though within a few barns of the target's cross section, temperatures similar to those found several thousandths of a second after the Big Bang are produced. We're still not even close to Captain Kirk era Star Trek technology, where several milligrams of anti-matter, supposedly equivalent to a 600-megaton H-bomb, can be stored for weapons use. This only proves that tenured scientists are not immune from the science myths and misconceptions currently permeating the mainstream media.
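As a rough back-of-the-envelope check of the annihilation arithmetic (my own worked example, not a figure from CERN or from Star Trek lore), the yield follows straight from mass-energy equivalence:

$$
E = mc^{2}, \qquad
E_{\text{1 mg anti-matter} + \text{1 mg matter}} = (2\times10^{-6}\,\mathrm{kg})\,(3\times10^{8}\,\mathrm{m/s})^{2}
\approx 1.8\times10^{11}\,\mathrm{J} \approx 43\ \text{tons of TNT}.
$$

Scale that down to the handful of atoms a present-day Penning trap actually holds and the available energy becomes vanishingly small, which is the whole point of the comparison above.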

Multi-billion dollar scientific experiments like these have always been a subject of scrutiny by conspiracy theorists, not only over accusations of fleecing public funds, but also because of the difficulty of reproducing them, which makes it quite easy to question the resulting data and cry foul over the impossibility of a truly independent peer review. Witness the growing popularity of the "Moon Landing Hoax", which accuses NASA of faking its manned trips to the Moon precisely because only the United States could afford such multi-billion dollar scientific endeavors.

Friday, May 16, 2008

Radioactivity in Lead: Arresting Our Progress in Microelectronics?

Before, it was lead's toxic effects on human physiology that were the issue; now it is the potential for too much alpha-particle emission. Will lead's role in the electronics industry ever be less controversial?


By: Vanessa Uy


The Nobel Prize winning physicist Richard P. Feynman once said something to the effect that a civilization's technical prowess is gauged by how small it can build things; that is not an exact quote, but you get the picture. As our consumer electronics industry tries to design and build ever smaller chips, it may find out that there is a price to be paid in device operational reliability, and it may hit that limit even before the one imposed by the atomic structure of the semiconductors it is fabricating.

As an industry that prides itself on having enough time on its hands to ponder the sexier aspects of its work, the consumer electronics industry could be seen as full of it whenever it ponders deep solid-state physics questions like how quantum-mechanical effects disrupt electrons. I mean, how likely is the electroweak interaction of Steven Weinberg and Abdus Salam to affect the day-to-day workings of our consumer electronic goods? Well, the electronic engineers involved in mass-producing those goods can now get their hands dirty tackling what used to be a theoretical problem: how alpha-particle emissions from lead isotopes affect the reliability of their latest microprocessors. But before we proceed, here is a primer on where all of this "hot lead" came from.

The heavier elements found in the Earth's crust were created by our Sun's larger and much heavier predecessors when they went out in a blaze of glory by turning into supernovae. In the briefest fractions of a second before blowing themselves up, their nuclear processes created a host of heavy elements like uranium and lead, which were then reused when our Solar System and everything in it came into being. Much of the lead currently found on Earth was produced when an unstable element like uranium radioactively decayed. Not all the lead that we mine is stable; it still contains isotopes, more radioactive versions of itself, that are still decaying toward a stable end product. Only the long passage of time will reduce the amount of alpha-particle emissions.

The bad news is that these alpha-particle emissions can easily wreak havoc by increasing the incidence of errors in the chip circuitry's operation, and this will only get worse as electronics manufacturers fabricate finer circuits that are more sensitive to alpha particles. Not to mention that lowering the operating voltage of a device to reduce power consumption also increases the rate of errors due to alpha-particle interference.

One very effective solution is to obtain the lead used to manufacture soldering alloys from sources that are hundreds of years old, like lead salvaged from old ships / shipwrecks or the roofs of 1,000-year-old European cathedrals; any lead old enough that its radioactive isotopes have already decayed into their stable end products. I consider this a very effective solution because the hi-fi manufacturer Audionote uses a similar procedure in obtaining the silver for its audio amplifiers. Audionote only uses silver that has been out of the ground for at least 30 years. "The older the silver the better," the company says, because it is mindful of how stray alpha particles affect the sound quality of its products. Though I wonder why only thirty years; did Audionote buy its silver from a mine that uses fission bombs to dig its tunnels, since it takes about 30 years for most of a nuclear fallout's radioactivity to die down? If enough strontium-90 were present in the silver used in your audio amplifier, you would have far worse things to worry about than how alpha particles degrade the sound quality of your audio gear. And besides, only about half of any strontium-90 would have radioactively decayed into something else after 30 years. Given the high-level signals that Audionote's amplifiers handle, I can only wonder whether this is a marketing ploy to justify jacking up the retail price. Nonetheless, alpha-particle interaction in super-small computer chips will become a major issue in consumer electronics manufacturing circles sooner rather than later.
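The arithmetic behind "old lead" and the 30-year silver rule is just the exponential decay law. A quick Python sketch makes it concrete; the half-lives below (about 22.3 years for lead-210 and about 28.8 years for strontium-90) are standard published values I looked up, not figures cited by Audionote or any solder maker:

```python
def remaining_fraction(elapsed_years: float, half_life_years: float) -> float:
    """Exponential decay: N(t)/N0 = (1/2)**(t / half_life)."""
    return 0.5 ** (elapsed_years / half_life_years)

# Lead-210, whose decay chain is behind the alpha emissions in solder (~22.3 y half-life):
print(f"Pb-210 left after 100 years: {remaining_fraction(100, 22.3):.1%}")   # ~4.5%
print(f"Pb-210 left after 500 years: {remaining_fraction(500, 22.3):.2e}")   # ~1.8e-07

# Strontium-90 from fallout (~28.8 y half-life), as mentioned above:
print(f"Sr-90 left after 30 years:  {remaining_fraction(30, 28.8):.1%}")     # ~49%
```

That is why centuries-old shipwreck or cathedral-roof lead is prized: after twenty-odd half-lives there is essentially no lead-210 left to feed alpha emission.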

Lead in Soldering: The Electronic Industry’s Weakest Link?

Ever since the worldwide movement to ban the metal lead from our everyday lives started near the tail end of the 20th century, consumer electronics manufacturers have been busy searching for a replacement. Is this even feasible?


By: Vanessa Uy


Everyone's fears about the heavy metal lead and its toxic effects on our bodies are not entirely irrational, and many environmental pressure groups have been lobbying anyone willing to listen for a total ban on lead in our everyday lives. Though an admirable goal, I have serious doubts about its practicality and feasibility, especially if these people just sit back lazily without formulating their own billion-dollar solutions.

Scandinavian countries had already eliminated the toxic liquid metal mercury from their medical diagnostic instruments, such as thermometers, by the time the 21st century came along. But legislating similar laws to phase out other "potentially toxic" substances from our everyday lives is easier said than done, especially if our so-called environmental pressure groups are already very much inebriated by the "poisoned fruits" of Web 2.0.

Take solder, for instance. This humble tin and lead alloy has probably been used by humanity for thousands of years, yet it is still an indispensable part of the consumer electronics industry, especially when it comes to attaching microprocessors and other components to the circuit or PC board. It is very likely that the majority of passive consumers of consumer electronics do not, and will not, give a damn about the miracles of lead-based soldering. Only the manufacturers and a dedicated few electronics hobbyists and DIY enthusiasts care about how the lead content of our solder helps us perform those very tangible everyday miracles, even if we are the only witnesses to them: the miracle of turning a fistful of wires and components into a full-blown symphony orchestra. Some of us even resort to monitoring the lead levels in our bloodstream on an almost daily basis.

There have been countless attempts over the years to replace lead-based solders in the consumer electronics industry, ranging from very low melting point bismuth alloys to lead-free tin solders and even conductive polymers, i.e. plastics that conduct electricity. So far, only bismuth and lead-free tin alloys have shown promise in replacing lead-based solders, and even then they come with their own hosts of problems. Some bismuth-based alloys are available in forms that will melt in warm water, since they were originally used as triggering devices in fire suppression sprinkler systems. The catch is that bismuth-based soldering alloys do not form joints to components as strong as those of proven lead-tin soldering alloys.

Lead-free tin soldering alloys have been tried in the past as potential replacements for lead-based soldering alloys. The problem with lead-free tin solders is that they have a higher melting point than their lead-based counterparts, which raises the working temperature. The higher working temperature in turn increases the likelihood of damaging the electronic components being attached / soldered onto the circuit board. Manufacturing "dry runs" have even resulted in the dreaded "popcorn effect", which occurs when residual moisture in the epoxy packaging that shields an integrated circuit vaporizes at the high temperatures needed to melt these newfangled lead-free solders. The epoxy then detaches from the chip and pops open, which lets in contaminants like airborne dust particles and can even stress the packaging.
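To put some rough numbers on that higher working temperature, here is a tiny Python comparison. The melting points are typical textbook figures for the common alloys; treat them as ballpark values I looked up, not data from any particular solder manufacturer:

```python
# Approximate melting / liquidus temperatures of common solder alloys (deg C).
SOLDER_MELTING_POINT_C = {
    "Sn63/Pb37 (eutectic tin-lead)": 183,
    "SAC305 (Sn/Ag3.0/Cu0.5 lead-free)": 217,
    "Sn42/Bi58 (low-temperature bismuth)": 138,
}

baseline = SOLDER_MELTING_POINT_C["Sn63/Pb37 (eutectic tin-lead)"]
for alloy, melt_c in SOLDER_MELTING_POINT_C.items():
    print(f"{alloy}: {melt_c} C ({melt_c - baseline:+d} C vs. tin-lead)")
# The ~34 C jump to SAC305 is what drives the hotter reflow profiles
# and the moisture "popcorn" problem described above.
```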

A replacement for lead-tin solder is also not cheap. An electronics industry insider has said that a viable replacement could cost the US consumer electronics industry alone upwards of a billion dollars annually, depending on the materials involved. Economics aside, the lingering question is whether the increased volume of e-waste caused by unreliable electronic products failing early is better than waiting for everyone to throw their lead-filled electronics onto the trash heap 80 or a hundred years from now. Which do you think is more environmentally friendly?

Wednesday, April 23, 2008

Robotic Suits: Saving Lives on the Frontline?

The concept behind the project's latest incarnation probably dates back to the 1970s. Will robotic suits fulfil their much-touted role of providing much needed efficiency and "harm reduction" on the battlefield?


By: Vanessa Uy


The mainstream media's current interest in this project was partly sparked by the upcoming movie Iron Man, which is based on the popular Marvel Comics superhero. Sarcos Designs, a manufacturing company based in Utah, developed the latest version of the robotic suits for test demonstrations and for possible later use by the US Army. According to earlier test results, the prototype suits could allow each soldier wearing one to lift 1,000 pounds worth of gear.

Like the rationale behind Richard J. Gatling's invention of the Gatling gun, the robotic suits are touted primarily as a way to save human lives on the front line by eliminating the need for unnecessary personnel, so that those doing the drudgery of heavy lifting are the very same people who willingly volunteered to be exposed to hostile fire, i.e. the soldiers themselves. The robotic suits could also save time and money, since fewer people would be doing jobs that used to require scores of them.

The concept behind Sarcos Designs' robotic suits was actually tested back when the Black Sabbath song Iron Man was still in regular rotation on every popular FM station across America. In the 1970s, human factors engineers were experimenting with a wearable steel skeleton with a sophisticated control system that enabled US soldiers to pick up 1,000-pound loads. Known as the Man Amplifier project, it allowed the operator wearing the suit to lift tremendous loads just by using his or her regular movements. When the operator touches and lifts an object, the wearable steel skeleton transmits the pressure back to him or her. As the operator responds to these signals, the steel skeleton senses the muscle action, follows it exactly, and adds the powerful push of its hydraulic motors to "amplify" the operator's lifting action. Back then, the design engineers also had the dexterity of the operator in mind: the device had a repertoire of seven variations of elbow and shoulder movements, which allowed the operator to climb stairs and ladders.
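The "sense the muscle action and add hydraulic push" behaviour described above is, at heart, a force-amplification feedback loop. Here is a minimal Python sketch of that idea; the sensor interface, gain, and safety limit are all my own illustrative assumptions, not anything from the Man Amplifier project or Sarcos:

```python
AMPLIFICATION_GAIN = 20.0      # actuator adds 20x the operator's own force
MAX_ACTUATOR_FORCE_N = 5000.0  # safety clamp on what the hydraulics may add

def actuator_command(operator_force_n: float) -> float:
    """Map the force the operator applies at a joint to the extra force the
    hydraulic actuator should contribute, clamped for safety."""
    assist = AMPLIFICATION_GAIN * operator_force_n
    return max(-MAX_ACTUATOR_FORCE_N, min(MAX_ACTUATOR_FORCE_N, assist))

def control_step(sensed_operator_force_n: float, load_weight_n: float) -> bool:
    """One tick of the loop: read the force sensor at the joint, command the
    actuator, and report whether operator + actuator together hold the load."""
    total = sensed_operator_force_n + actuator_command(sensed_operator_force_n)
    return total >= load_weight_n

# A 1,000 lb load is roughly 4,450 N; about 225 N (roughly 50 lb) of operator
# effort amplified 20x would be enough under these made-up numbers.
print(control_step(sensed_operator_force_n=225.0, load_weight_n=4450.0))  # True
```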

Despite the prototype's relative successes, wearable robotic suits never gained widespread use because of power source issues, and the apparent demand for the technology did not justify the somewhat steep development costs incurred by the project. So the project was shelved for another time, because the problem these robotic suits were meant to solve could be handled cheaply by other means; cheap labor from illegal migrants, to put it bluntly.

But wearable robotic suits that amplify a person's lifting capability did gain widespread use, albeit in the world of science fiction. Lt. Ripley (played by Sigourney Weaver) in the movie Aliens used a similar device to jettison a hostile alien life form into space near the climactic end of the film. Given recent advances in electric motor and battery design, Sarcos Designs' robotic suits could take advantage of this, especially the availability of small, high-powered lithium-ion batteries. This could make robotic suits an indispensable tool of the US Armed Forces within ten years, given the current urgency of the need for such technology.

Wednesday, March 26, 2008

The European Space Agency’s ATV: A Billion - Dollar Trash Bag?

Slated to help replace the aging American Space Shuttle fleet by 2010, is the European Space Agency's Automated Transfer Vehicle nothing more than a multi-billion-dollar garbage bin?


By: Vanessa Uy


The European Space Agency's latest contribution to the International Space Station program is an unmanned spacecraft that will serve as a re-supply vehicle for the orbiting ISS. Named after the visionary 19th-century French science fiction author, the Jules Verne Automated Transfer Vehicle will serve as a replacement for both of the aging spacecraft currently assigned to re-supplying the International Space Station: the American space shuttle fleet and the Russian unmanned Progress re-supply craft, which dates back to when there was still a Soviet Union.

The ATV was developed entirely in-house by the European Space Agency. To keep development costs under two billion dollars, the E.S.A. designed the Jules Verne ATV to burn up upon reentry into the Earth's atmosphere. When completed, it will be launched to the International Space Station by a modified Ariane rocket / launch vehicle from the Kourou spaceport in French Guiana. The near-equatorial location of the launch site is chosen primarily for the fuel savings gained by harnessing the speed of our planet's rotation to help boost space vehicles into favorable orbits.

As the Jules Verne ATV enters service, supplies from Earth destined for the ISS will be loaded onto it. Since the ATV docks with the ISS fully automatically, using proprietary GPS- and laser-based technology developed in-house by ESA, it will be able to perform its intended mission with minimal or no human intervention at all. After the scientists / astronauts maintaining the ISS receive their supplies, waste, expendable flotsam, and other used material from the station will be loaded into the Jules Verne ATV. The unmanned spacecraft will then be sent on a trajectory to burn up upon reentry into the Earth's atmosphere.

Allowing the Jules Verne ATV to burn up in the Earth's atmosphere once its usefulness is over seems, to me at least, an utter waste of quite expensive technology. Representatives of the European Space Agency say that making the Jules Verne ATV reusable would incur excessive development costs and make the spacecraft prohibitively expensive to operate regularly. But in my opinion, the E.S.A.'s decision to burn up the spacecraft after its mission is completed is also a symptom of why some countries resent the West, since Western nations / entities like the E.S.A., among the few space-ferrying entities on this planet, chose to make the Jules Verne ATV "expendable". It only reinforces the nagging notion in my head that these Western powers deem an emergency landing by astronauts in "unfriendly territories" like Iran, North Korea, or Taliban-controlled parts of Afghanistan "unthinkable". And the non-aligned nations like Iran are the ones concerned about the increasing "militarization" of space?

Friday, February 15, 2008

The One Laptop Per Child Program: The Politics and the Bureaucracies

With two major competing programs known so far, will the current one laptop per child effort really help children in developing nations prepare for future I.T. jobs, or will the two competing programs devolve into a commercial turf war similar to the VHS versus Betamax war of the early 1980s?


By: Vanessa Uy


Despite extensive press coverage, a lot of us "netizens" never seem to have lost interest in the promises and the problems surrounding the one laptop per child program. Of late, there are two major programs, each arguing the merits of its raison d'être on grounds like fiscal sensibility, technical feasibility, and sheer practicality. Both are currently being field-tested at the scale of a typical school in Nigeria to gauge the success, or failure, of the approach.

One "version" of the program is the brainchild of Nicholas Negroponte. One of the aims of his one laptop per child program is to provide a bridge across the "digital divide" that exists in developing countries; another is to promote computer literacy in the poorest parts of the world. The laptops used in Nicholas Negroponte's pilot scheme cost a little over a hundred US dollars each; they are Internet / mesh network capable and can send video and still pictures to the Internet via their built-in webcams. If it succeeds, the program would serve as irrefutable proof of the modern computer's feasibility as an educational tool even in developing countries. One of the program's more intransigent problems is the endemic lack of a steady supply of mains / grid electricity in developing countries. This can be addressed by using a rip-cord operated generator similar to those used in the portable radios distributed throughout Africa during the 1990s to help broadcast information on preventing the spread of HIV / AIDS. Though equipping the laptops with such generators would increase their price, there are also plans for solar / photovoltaic chargers for the laptops' built-in batteries. Despite the problems, the hands-on, try-something, creativity-promoting ethos of the laptops has been one of the program's most redeeming qualities. By training their problem-solving skills, the laptops have become a very positive educational influence on the students, despite Nigeria's rigid "old school" tradition of educational hierarchy in which new knowledge and skills are supposed to flow only one way, from the teacher to the students.
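As a sanity check on the rip-cord / crank idea, the power arithmetic looks roughly like this in Python. The 2-watt laptop draw, 10-watt cranking output, and 70 percent charging efficiency are my own placeholder assumptions, not figures from the program, so treat the result as purely illustrative:

```python
# Assumed figures, for illustration only:
LAPTOP_DRAW_W = 2.0        # power a deliberately low-power laptop consumes in use
CRANK_OUTPUT_W = 10.0      # what a person might feed in by pulling a rip-cord
CHARGE_EFFICIENCY = 0.7    # losses in the generator and battery charging

def minutes_of_use_per_minute_of_cranking() -> float:
    """Energy into the battery per minute of cranking, divided by the
    laptop's consumption rate."""
    energy_in_wmin = CRANK_OUTPUT_W * CHARGE_EFFICIENCY * 1.0
    return energy_in_wmin / LAPTOP_DRAW_W

print(f"~{minutes_of_use_per_minute_of_cranking():.1f} minutes of use "
      f"per minute of cranking")   # ~3.5 under these assumptions
```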

The other one laptop per child program, rivaling Nicholas Negroponte's, is run by the Intel Corporation. Called the Intel Classmate PC program, it is being tried at another school in Nigeria. According to Intel, the Classmate PC program is about investing in school kids (tapping the knowledge economy?). The laptops Intel provides to the students currently cost 350 US dollars each. Intel's laptops are more costly because of the extensive use of solid-state flash memory technology. At present, solid-state flash memory is much more expensive than conventional data storage devices like hard drives and CD / DVD burners, but flash-based devices work much more reliably than their "conventional" counterparts in the arduous conditions where the laptops are likely to be used: dust, moisture, and the shock of a laptop being "accidentally" dropped. The Intel Corporation says its program invests in Nigeria's children by "grooming" them to acquire the skills of future I.T. employees, thus making the children's job prospects much more secure.

From my point of view, both programs are genuinely visionary in tackling the problems encountered when developing countries try to improve their educational systems. But will the promise of both programs remain a dream when faced with the harsh reality of the high cost of upgrading developing countries' telecommunications infrastructure just to make them Web 2.0 capable, and what about these countries' electrical grid infrastructure? Plus, let's not forget that most developing countries like Nigeria are still trying to upgrade their existing "conventional" educational systems just to provide basic literacy skills, which include the English language by the way, and which are a prerequisite to computer literacy.

Even though the one laptop per child idea is already some five years old, both programs' original "mission directive" was to alleviate the "lack of qualified teachers" problem in developing countries by giving financially disadvantaged kids access to the vast stores of knowledge available on the Internet. Despite current technical problems, like the state of local telecommunications and power grid infrastructure, plus the politics of censorship recently discussed at this year's Internet Governance Forum (IGF) in Rio de Janeiro, Brazil, one of the greatest benefits of the one laptop per child program may be to the environment, because unnecessary air travel can be kept to an absolute minimum. NGOs and program overseers can track the progress of their respective "pet projects" online, because the kids themselves upload video documentation of the program's progress. Who knew that something that started out as an educational program would become part of the solution for reducing our overall "carbon footprints"?

Wednesday, January 16, 2008

Hybrid Cars versus Electric Cars: Vying for Green Credentials?

The "two major roads" that lead to more environmentally friendly motoring are clamoring for our votes. Which one will win, and which one will you vote for?


By: Ringo Bones and Vanessa Uy


The two emerging technologies that power a new generation of environmentally friendly cars, namely "hybrid power plant cars" and "pure electric powered cars", are now clamoring for prospective customers who vote with their wallets and / or checkbooks. Marketing success hinges largely on which of the two technologies is adopted by the major automakers; irreproachable "green credentials" are now a major issue in determining which of the two will sell, and, to a greater or lesser extent, so are simplicity of operation and running costs. So here are the merits and faults that accompany the two different technologies.

A conspiracy theory surfaced in the mainstream media surrounding the "demise", i.e. product recall, of GM's EV1. The theory states that General Motors acted at the behest of the "1996 Republican majority Congress", in collusion with "Big Oil", to "kill" the EV1 because its "miraculous" performance could end America's dependence on "Middle Eastern petroleum." Back in 1996, GM's EV1 was the first pure electric car produced in commercial quantities by a major automobile company. It had pretty good credentials under its belt despite being powered by heavy and "inefficient" lead-acid batteries that posed environmental problems of their own. Fully charged, the EV1 had a range of 65 miles.

A lot has happened since then. Today, the car referred to as the spiritual descendant of the EV1 is the Tesla Roadster, made by Tesla Motors, a small automotive start-up in San Francisco, California. One advantage the Tesla Roadster has over GM's EV1 is weight, or the lack of it. The Tesla Roadster uses carbon fiber, a material roughly five times lighter and five times stronger than steel, on a chassis modeled after the Lotus Elise. The Tesla Roadster's claim to fame is its advanced lithium-ion / lithium-polymer battery pack, which is not only several times lighter than the batteries used in the EV1 but also more efficient, giving the Roadster a range of 250 miles on a single charge. Because of the carbon fiber construction and lithium batteries, the Tesla Roadster's high power-to-weight ratio allows it to accelerate like a high-end conventional gasoline-fueled racecar.
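For a feel of what that 250-mile figure implies, here is a quick Python back-of-envelope. The 53 kWh battery capacity is an assumption on my part (published figures for the Roadster hover around that), so treat the output as illustrative rather than a spec:

```python
# Assumed pack size; the 250-mile range is the figure quoted above.
BATTERY_CAPACITY_KWH = 53.0
RANGE_MILES = 250.0

wh_per_mile = BATTERY_CAPACITY_KWH * 1000.0 / RANGE_MILES
print(f"~{wh_per_mile:.0f} Wh per mile")          # ~212 Wh/mile

# For comparison, a gallon of gasoline holds roughly 33.7 kWh of chemical
# energy, so on raw energy content this is in the neighborhood of a
# 150+ mpg-equivalent figure (before charging and generation losses).
mpg_equivalent = 33.7 * 1000.0 / wh_per_mile
print(f"~{mpg_equivalent:.0f} miles per gallon-equivalent")   # ~159
```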

In the other camp are hybrid cars, i.e. cars powered both by a fossil-fueled internal combustion engine and by storage batteries that drive electric motors. The environmental merit of hybrid cars is that the internal combustion engine can be made smaller than that of "conventional" cars because it is primarily used to recharge the batteries, thus generating lower emissions of carbon dioxide and other pollutants. Hybrid cars also get better "mileage" than "conventional" cars because the two power plants are used in conjunction only in demanding situations, i.e. going uphill and / or when quick acceleration is required. The most famous and most heavily advertised hybrid is the Toyota Prius.

While hybrid cars are praised because, theoretically, they can never run out of "juice" on the road thanks to the current ubiquity of gas / petrol stations over electrical charging stations, pure electric cars, especially ones using the latest generation of lithium-ion batteries, have better performance due to their higher power-to-weight ratio compared to current hybrids. Also, if major automakers start mass-producing them again, pure electric cars have an advantage over hybrids in environmental friendliness, because it is much easier and cheaper to install air pollution mitigating devices at the power plant than on the tailpipe of every car on the road. Borrowing from the "transistor principle" that a system with fewer moving parts is less prone to breakdown, pure electric cars also have the edge because they use only simple electric motors as the primary "engine", as opposed to a hybrid car that still has a conventional internal combustion engine with an inherently inefficient, in energy terms, clutch and gear drive system. Pure electric cars can also easily tap electricity produced from sustainable and / or non-carbon dioxide generating power plants like wind farms, solar photovoltaic plants, fuel cell based plants, and so on. And in the not-so-distant future, carbon offsetting might be legislated to include the transportation sector; your carbon dioxide generating hybrid car could be singled out by the taxman in the coming years. Also, hybrid cars have "dubious" resale value, as reported by Jeremy Clarkson in the 2003 to 2004 season of "Top Gear", an automotive TV show reviewing budget and high-end cars. In one episode, he advises against buying a hybrid and recommends choosing instead a conventional car with better mileage, because such a fuel-efficient conventional car is less likely to end up lying idle in some junkyard than its "hybrid" competition.

So What Is This Polonium Business Anyway

Since the sensational media focus on Alexander Litvinenko's assassination by polonium-210 poisoning, the element has gained not only much needed fame but also notoriety, despite the general public's ignorance of its legitimate uses.


By: Ringo Bones and Vanessa Uy


So what is polonium, by the way? First, let us examine it from a rational point of view. Back in 1898, when the groundwork for 20th-century nuclear physics was already being laid, Pierre and Marie Curie did experiments with pitchblende, an ore from which the element uranium is extracted. The Curies found that pitchblende was more than four times as radioactive as uranium on a pound-for-pound basis. Armed with this finding, they concluded that pitchblende must contain unidentified elements more radioactive than uranium. Since uranium had already been discovered, the Curies took the opportunity to explore the as yet unknown constituents of pitchblende. Laborious chemical separations were carried out, resulting in the Curies' discovery of two new radioactive elements in 1898: radium and polonium.

Despite the fame their work on radium and polonium earned them, Marie Curie, her daughter Irène, and son-in-law Frédéric all died as victims of the effects of radioactivity. Even their notes, after all this time, can only be handled behind radiation-proof glass, with the associated shielding and robotic arms used for highly radioactive materials; a "testament" to the persistence of radioactive contamination.

Technically, the chemical nature of polonium is known largely from observing extremely small amounts of the element through chemical reactions, via radioactive-tracer techniques in which polonium is mixed with tellurium as a coexisting reactant. The quantity of natural polonium available for scientific study is extremely small: over 25,000 pounds of pitchblende ore must be refined to obtain just a gram of polonium. Since the half-life of the most abundant isotope, polonium-210, is only 138.7 days, its scarcity is inevitable. One method of producing the isotope for industrial use is to bombard bismuth-209 with neutrons to form bismuth-210, which decays by the loss of an electron to give polonium-210.
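Written out as nuclear equations (a standard textbook rendering on my part, not something from the original article), that production route and polonium-210's own alpha decay look like this:

$$
{}^{209}\mathrm{Bi} + {}^{1}_{0}n \rightarrow {}^{210}\mathrm{Bi}, \qquad
{}^{210}\mathrm{Bi} \rightarrow {}^{210}\mathrm{Po} + \beta^{-} + \bar{\nu}, \qquad
{}^{210}\mathrm{Po} \rightarrow {}^{206}\mathrm{Pb} + \alpha \quad (t_{1/2} \approx 138\ \mathrm{days}).
$$

That final alpha decay is exactly what makes polonium-210 useful as the static-busting alpha source described next.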

Polonium has legitimate uses. Our favorite is removing dust from records / vinyl LPs (we still have them, we still use them, we still love them, and they sound way, way better than CDs, iPods, or downloads, especially on snare drums and cymbals!). We use the "Nuclear Products Company 3R500 Staticmaster", a polonium-treated jaguar-hair brush that eliminates static and dust from records, and we swear by this domestic static electricity neutralizer. You might criticize us for using a cancer-causing apparatus made with material from an endangered species, but as Michael Fremer put it in the February 1998 issue of Stereophile: "When clean records are at stake, who cares?" And remember that 138.7-day half-life: the "Staticmaster" needs "recharging" from time to time.

By the way, polonium is also used as an alpha-particle source for scientific work. Since alpha particles have very weak penetrating power and cannot even go through a piece of paper, polonium is only dangerous when taken internally, whether by ingestion, inhalation, or injection into the human body.

We hope that the incident with Alexander Litvinenko doesn’t make our powers-that-be legislate irrational laws brought about by fear and lack of understanding of the element polonium.

Thursday, January 10, 2008

Cold Fusion, An Alternative Energy Crank

Had it worked, cold fusion could have been the magic bullet mankind’s been looking for to solve our energy and environmental crisis.


By: Ringo Bones


Generating energy via nuclear fusion is a piece of cake, provided you can build a reactor that can generate temperatures hotter than the Sun's interior without destroying itself in the process. If somehow you could do fusion at room temperature, you could say hello to unlimited clean energy and goodbye to greenhouse gasses and radioactive waste for good.

1989 was a very exciting year for those of us who grew up under the shadow of the Cold War: détente was declared between the United States and the then Soviet Union, the Berlin Wall was torn down, and, had it been true, cold fusion was discovered.

Back then, B. Stanley Pons, professor of chemistry at the University of Utah, and his colleague Martin Fleischmann of the University of Southampton in England were credited with supposedly discovering cold fusion. They touched off a furor by announcing with great fanfare, in March of 1989 in Salt Lake City, that they had achieved nuclear fusion (a process that would normally require multimillion-degree temperatures) in a set-up consisting of a jar of water at room temperature. As they claimed, this so-called cold fusion manifested itself when an electric current was passed through a palladium electrode immersed in "heavy water", i.e. water whose hydrogen atoms are deuterium, a heavier isotope of hydrogen found in trace amounts in ordinary water. The Utah team claimed that the palladium absorbs deuterium atoms, which at the atomic level are forced to fuse together, producing heat and neutrons. The hope of an unlimited source of cheap and clean energy was at stake, but there was one big conundrum.
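The neutrons mattered because deuterium-deuterium fusion has two well-known branches (this is standard nuclear physics, not part of Pons and Fleischmann's claims), and the neutron branch is the tell-tale signature other labs went looking for:

$$
{}^{2}\mathrm{D} + {}^{2}\mathrm{D} \rightarrow {}^{3}\mathrm{He} + n \ (+\,3.27\ \mathrm{MeV}),
\qquad
{}^{2}\mathrm{D} + {}^{2}\mathrm{D} \rightarrow {}^{3}\mathrm{T} + p \ (+\,4.03\ \mathrm{MeV}).
$$

Had genuine fusion been happening at the claimed rate, the neutron flux from the first branch would have been hard to miss (and hard on the experimenters).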

One of the precepts the "scientific method" prides itself on is that experimental procedures can be duplicated by other scientists and that the resulting data is reproducible, i.e. the data obtained should be the same and should deviate only within a prescribed limit. But Pons and Fleischmann were vague about how their "cold fusion reactor" worked, and when other scientists tried to duplicate the pair's results, all they got was mostly cold water for their troubles. As a result, profound skepticism grew among physicists, even as an intensive cold fusion research effort ramped up, involving more than a thousand scientists and an estimated daily expenditure of US$1 million.

The holder of the cold fusion patent, the University of Utah, allowed it to lapse as cold fusion fell from view. To this day, B. Stanley Pons and Martin Fleischmann are said to continue their work on cold fusion, albeit separately and quietly.

The last time cold fusion got major media exposure in the 20th century was in the movie "The Saint." Even in the new millennium, news coverage pops up from time to time, and most people have now formed a stereotype of who will likely discover cold fusion: usually a young, disadvantaged scientist badly in need of legitimacy and political support, under 30, with a lab set up in a barn somewhere in the "grain belt" of the United States. We live in hope that the next time we hear about "cold fusion" on the news, it will be the real thing.

Ecological Technology

Can we solve our current energy crisis in an ecologically friendly manner? Can we gain a better understanding of ecological systems when they are viewed from a technological perspective? The answer is a big yes, but only if our intellect is up to the challenge.


By: Ringo Bones and Vanessa Uy


Ecology is the branch of science that deals with the interactions of living organisms and their environment, a term derived from two Greek words meaning "the study of the home", while technology is the totality of the means employed to provide the objects necessary for human sustenance and comfort.

Humans are the dominant "life form" on this planet and are viewed as the root cause of all of our ecological problems; paradoxically, only we can solve the problems that we created in the first place. One of the problems we face today is our increasing demand for energy generation that is not necessarily environmentally friendly to begin with. How we go about solving this must go hand in hand with how we protect our environment to keep our planet habitable in the future. All the energy that mankind utilizes, whether renewable or not, comes from nature. Only a handful of scientists, like R. Buckminster Fuller, view ecosystems as an interrelationship between matter and energy, or more aptly between living organisms and energy.

All ecosystems are governed by the laws of thermodynamics, which describe the relationship between matter and energy in a system. The First Law of Thermodynamics states that the total energy in a system is constant, i.e. energy can neither be created nor destroyed. The Second Law of Thermodynamics states that there is a tendency toward entropy, the maximum disorganization of a structure and the loss of usable energy. These laws prevent us from formulating an easy solution to our energy problem in an ecologically friendly manner. But first, let's look at how nature manages energy to sustain an ecosystem.
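In the usual textbook notation (my own summary, consistent with the wording above), the two laws read:

$$
\Delta U = Q - W \quad \text{(first law: energy is conserved)}, \qquad
\Delta S_{\text{isolated}} \geq 0 \quad \text{(second law: the entropy of an isolated system never decreases)}.
$$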

In autotroph-based ecosystems, the energy stored through net primary production by photosynthetic organisms is used to support higher trophic levels. Energy flows only one way through these levels, with a decreasing amount at each level. The energy captured by the autotrophs (photosynthetic plants) does not revert back to the sun; likewise, energy that flows to the herbivores does not flow back to the photosynthetic plants, and so on: as it moves through the various trophic levels, energy is no longer available to the previous level. The important implication of this unidirectional flow of energy is that the ecosystem would collapse if the primary source of energy, such as the sun, were cut off.

The next major fact to note is the progressive decrease in energy at each trophic level. This can be explained by the energy lost as heat in metabolic activity, which manifests here as respiration. Such an ecosystem also has a large amount of unutilized energy. Even if more of this "unutilized energy" were used in a more efficient system, there would still be considerable loss due to respiration. Thus, even with more efficient energy utilization, considerable energy would still be required to maintain the system.
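A toy Python model makes that "progressive decrease" concrete. Ecologists often quote roughly ten percent energy transfer between trophic levels as a rule of thumb; the real figure varies widely, so the numbers below are purely illustrative:

```python
TRANSFER_EFFICIENCY = 0.10   # rough "10% rule" between trophic levels (illustrative)

def energy_pyramid(primary_production_kcal: float, levels: int) -> list[float]:
    """Energy reaching each trophic level, starting from the autotrophs."""
    pyramid = [primary_production_kcal]
    for _ in range(levels - 1):
        pyramid.append(pyramid[-1] * TRANSFER_EFFICIENCY)
    return pyramid

# Autotrophs -> herbivores -> primary carnivores -> secondary carnivores
for level, kcal in zip(
        ["autotrophs", "herbivores", "primary carnivores", "secondary carnivores"],
        energy_pyramid(10_000.0, 4)):
    print(f"{level}: {kcal:,.0f} kcal")
# 10,000 -> 1,000 -> 100 -> 10 kcal: why long food chains are energetically expensive.
```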

These factors, unidirectional energy flow and inefficient energy utilization, account for the requirement of a steady stream of energy to avoid the collapse of an ecosystem. An ecosystem simply cannot sustain itself when deprived of a source of energy input for an extended period of time.

To learn more about this energy flow, or how "Mother Nature" manages energy in ecosystems, Vanessa and I studied R. Buckminster Fuller's thesis about "energy and wealth." At first we thought we had come to the wrong conclusion. At present, most college physics students are taught that the energy of a closed system remains constant but that, as time goes on, its entropy always increases; that is, natural processes always tend toward states of increased disorder. Based on what they are taught, those students could conclude that humanity has been using up our available sources of energy at a rate greater than the ability of our technology to make new sources available. Until solar energy is in everyday use, humanity had better hang on to our oil, coal, natural gas, and wood.

The more we studied R. Buckminster Fuller's thesis, the more uncomfortable we felt about his rejection of the second law of thermodynamics as a universal principle. That rejection is based on his own axiom that there are no closed systems; that closed systems, like straight lines or bodies at rest, are obsolete Aristotelian concepts that hinder, rather than help, our understanding of the universe.

Fuller's synergetic-energetic geometry is still debatable, of course, and it will probably take another generation of experiments and research before his position on the second law of thermodynamics is truly confirmed or refuted. However, a refinement of that law has become generally accepted, and if "most college physics students" do not know about it, most graduate physics students (like Ringo) do, largely because it is a relatively recent development; only undergraduates who go out of their way to follow the latest trends in thermodynamic research will have encountered it. This refers to the development of general systems theory, which distinguishes between closed and open systems. While closed systems follow the second law precisely, and entropy increases within them, making less energy usable, open systems operate without this restriction, so that negative entropy (negentropy) may increase, making energy more usable.


As L. Brillouin wrote in American Scientist in 1949:


The second [law] means death by confinement…Many textbooks, even the best of them, are none too cautious when they describe the increase of entropy…The theory of relativity, and all the cosmological, quantum mechanical theories that followed…involve a bold revision and drastic modification of the laws of thermodynamics…The earth is not a closed system…The sentence to “death by confinement” is avoided by living in a world that is not a confined and closed system.


Of course, this does not deny the existence of an ecological problem. Rather, the scientists concerned wish the problem to be understood correctly: as a misuse of technology, like the increase of "greenhouse gasses" in our atmosphere, rather than as the consequence of some inescapable natural law. This is the point that Fuller and others have emphasized so urgently: there is nothing in thermodynamics that makes the growing ecological disaster inevitable.

So what is all of this supposed to mean? First, the ecological structure of our planet is so complex that it is very easy for the powers that be, like industrialists and politicians with the help of scientists on their payroll, to deny the existence of global warming. They do this by stating that our current knowledge of the planet's ecosystem is insufficient or flawed, and contradicting them would take research and experiments requiring so much time and money as to be anathema to the shaky relationship between science and politics. Second, we cannot stop technological progress; the genie is out of the bottle, so we have to deal with it rationally. One viable solution is to move our less ecologically friendly industries out into space, hence the urgent need for "green energy" to escape the Earth's "gravity well." This in turn would make it easier for us to turn the entire planet into a nature preserve with us humans as an integral part of it.

First Woman in Space

Was the Soviet Space Program a milestone for feminism or an esoteric footnote in history to be gawked at by academics?


By: Vanessa Uy


In today's world, where feminism is a living, breathing ideology, why is it that virtually no one knows who Valentina Tereshkova is? Most feminists worth their salt within a stone's throw of me don't even know her. Even more surprising is that the majority of those who do know her exploits are men over 32. Isn't that weird? She started out working as a mill hand in Soviet Russia, then probably served the mandatory military service that was common in the former Soviet Union. During her service, Valentina Tereshkova became a skilled skydiver, which didn't go unnoticed by the powers that be in the Soviet space program. In June 1963, the then 26-year-old Valentina Tereshkova became the first woman in space, almost 20 years ahead of the first American woman in space, Sally Ride. Valentina Tereshkova made 48 orbits in the Vostok 6 spacecraft. Later she became the bride of cosmonaut Andrian Nikolayev.

At the beginning of the 20th century, almost yesterday in terms of the advancement of women's causes, feminists in England were brandishing their allegiance to Marxist-Leninist socialism in the hope of advancing their cause. Isn't Valentina Tereshkova proof of socialism's amicability with feminism, or is she just a casualty of the Catholic Church's exercise of "posse comitatus" against left-leaning views?

“Who Really Invented Radio?”

By: Vanessa Uy


The working title of this article should have been "Please, For The Love Of God, Tell Me Who Invented Radio!" If you're among the sorry, countless individuals who think Marconi single-handedly invented radio, this article is not for you. Those with a passing interest in Nikola Tesla will find it either enlightening or a bit humbling. So without further ado, let me take you on a journey of exploration.

Our story starts in the latter half of the 19th century, with a cast of chaps, most of them from the United States, each with a very interesting claim to having invented radio. One of them is Nathan B. Stubblefield of Murray, Kentucky, who began working on experiments and devices related to radio as early as 1892. His notable public demonstrations, like the one he performed on May 30, 1902, did not go unnoticed by serious publications and journals like Scientific American. But today this Murray, Kentucky native is virtually unknown to anyone not from his hometown.

At about the same time, Dr. Mahlon Loomis created a crude tuned-antenna circuit. Despite his prolific genius, he never received the grant he sought from Congress; if he had, the invention of radio might have been advanced by a few decades. Even in Virginia, Dr. Loomis is probably known only to history buffs.

Two electricians conveniently left out by the Tesla advocates are Oliver Lodge, whose patent anticipated Tesla's in 1898, and John S. Stone, who filed a month earlier than Tesla in 1900.

In 1880, Alexander Graham Bell created a device called the photophone. It worked by using a voice signal to modulate a light beam, but it was never more than a technological objet d'art exhibited at world fairs. It is, however, the same principle behind fiber-optic laser telecommunications.

Of more significance was the wireless telephone patented in 1886 by Amos Emerson Dolbear, a physics professor who demonstrated it publicly in the United States, Canada, and Europe. At about the same time, John Trowbridge at Harvard was doing extensive experiments in both induction and earth- or water-conduction wireless apparatus. Thomas Edison, the noted superstar inventor and de facto anti-Tesla, developed wireless telegraph / telephone systems to communicate with moving trains during the 1880s. Granville Woods and Lucius Phelps also developed similar wireless communication systems.

A chap called Alexander Popov, whom the Russians claim invented radio, is also a viable candidate. When the former Soviet Union launched one of her first space probes to explore the far side of the Moon, a crater there was named after him.

When the United States Supreme Court entered "The Great Radio Controversy" in October of 1942, a can of worms was opened; luckily, its influence mostly affects history academics and Tesla fans. Though the invention of radio had long been famously attributed to Guglielmo Marconi, the Supreme Court justices were intrigued by patents and scientific publications which pointed to Nikola Tesla as radio's true creator. In June of 1943, the Court decided that Nikola Tesla had, in fact, invented modern radio technology. It ruled that Marconi's patents were invalid and had been "anticipated." Tesla was vindicated, though far from victorious. Some five months before, alone and destitute in a New York hotel room, the great inventor had passed away. His papers and notes were confiscated by the United States Alien Property Office and are now housed in the Nikola Tesla Museum in Belgrade, Yugoslavia. I hope to visit there someday.

It's not easy, but based on the existing proof, I pick Nikola Tesla as the true inventor of radio. To me, Stubblefield, Loomis, and Lodge, as well as the others mentioned, still await more proof in order to rise above their present status as mere "hometown heroes", which is also the predicament of Alexander Popov.

Despite having a heavy metal band named in his honor and being portrayed by David Bowie in the stage-magic movie "The Prestige", Tesla is still a relatively unknown genius even today. Ask most accomplished electrical / electronics engineers who invented radio and most of them will answer "Marconi." It's one of those things that makes you a bit sad, doesn't it?