Sunday, 30 September 2012

Brain Implants, a brief history and possible future

Talk of brain implants tends to conjure up an invasive surgical procedure: cutting open the skull to place electrodes of some kind. Technology now provides miniaturization and better interfaces between mind and machine, and without that research there would be no possibility of regaining sight or hearing, or of curing certain diseases. Here I consider past exploits in order to predict the future of brain implants.
Historically, in 1870, Eduard Hitzig and Gustav Fritsch demonstrated that electrical stimulation of the brains of dogs could produce movements. Robert Bartholow showed the same to be true for humans in 1874. By the start of the 20th century, Fedor Krause began to systematically map human brain areas, using patients that had undergone brain surgery. Prominent research was conducted in the 1950s: Robert G. Heath experimented with aggressive mental patients, aiming to influence his subjects' moods through electrical stimulation.
Yale University physiologist Jose Delgado demonstrated limited control of animal and human subjects' behaviours using electronic stimulation. He invented the stimoceiver, or transdermal stimulator, a device implanted in the brain to transmit electrical impulses that modify basic behaviours such as aggression or sensations of pleasure. Delgado later wrote a popular book on mind control, Physical Control of the Mind, in which he stated: "the feasibility of remote control of activities in several species of animals has been demonstrated. The ultimate objective of this research is to provide an understanding of the mechanisms involved in the directional control of animals and to provide practical systems suitable for human application."
In the 1950s, the CIA also funded research into mind control techniques, through programs such as MKULTRA. Perhaps because he received funding for some research through the US Office of Naval Research, it has been suggested (but not proven) that Delgado also received backing from the CIA. He denied this claim in a 2005 Scientific American article, describing it as mere speculation by conspiracy theorists, and stated that his research was purely scientifically motivated: to understand how the brain works.
With the end of the Cold War, psychosurgery went into decline, due to the introduction of new drugs, a growing awareness of the long-term damage caused by the operations, and doubts about its efficacy. Brain implants seemed tarnished by the idea of thought control and thought manipulation.
The Food and Drug Administration (FDA) approved Deep Brain Stimulation (a surgical treatment involving the implantation of a medical device called a brain pacemaker, which sends electrical impulses to specific parts of the brain) as a treatment for essential tremor in 1997, for Parkinson's disease in 2002, and for dystonia in 2003. DBS is also routinely used to treat chronic pain and has been used to treat various affective disorders, including major depression. While DBS has proven helpful for some patients, there is potential for serious complications and side effects.
Meanwhile, scientists have recently demonstrated that a brain implant can improve thinking ability in primates. By implanting an electrode array into the cerebral cortex of monkeys, researchers were able to restore and even improve their decision-making abilities. The implications for possible therapies are far-reaching, including potential treatments for cognitive disorders and brain injuries. But there is also the possibility that this could lead to implants that boost intelligence.
Researchers from Wake Forest Baptist Medical Center, the University of Kentucky, and the University of Southern California took five rhesus monkeys and trained them on a delayed match-to-sample task.
Once they were satisfied that the correct mapping had been done, they administered cocaine to the monkeys to impair their performance on the match-to-sample task (a rather severe drug to administer, but there you have it). The monkeys' performance immediately fell by 20%. It was at this point that the researchers engaged the neural device. Specifically, they deployed a "multi-input multi-output nonlinear" (MIMO) model to stimulate the neurons that the monkeys needed to complete the task. The inputs of the device monitored such things as blood flow, temperature, and the electrical activity of other neurons, while the outputs triggered the individual neurons required for decision-making. Taken together, the input/output model was able to predict the output of the cortical neurons – and in turn deliver electrical stimulation to the right neurons at the right time.
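The published MIMO model itself is far more sophisticated, but the general idea of predicting an output neuron's firing from recorded inputs, and stimulating when the prediction falls short, can be sketched roughly. Everything below (the sigmoid link, the weights, the threshold) is an illustrative assumption, not the study's actual model:

```python
import math

def predict_output(inputs, weights, bias=0.0):
    """Predict an output neuron's firing probability from recorded
    input-layer activity: a weighted sum passed through a sigmoid.
    The real MIMO model uses nonlinear kernels fitted to recordings."""
    drive = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-drive))

def stimulation_pattern(inputs, weight_rows, threshold=0.5):
    """For each output neuron (one weight row each), return True
    (stimulate) when predicted firing falls below the threshold --
    i.e. deliver current to the right neurons at the right time."""
    return [predict_output(inputs, row) < threshold for row in weight_rows]

# Two output neurons sharing the same two inputs: the first is driven
# strongly (no stimulation needed), the second is suppressed (stimulate).
pattern = stimulation_pattern([1.0, 0.0], [[3.0, 0.0], [-3.0, 0.0]])
```

The point of the sketch is the closed loop: prediction from monitored inputs decides, moment by moment, which outputs to drive electrically.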
The researchers successfully restored the monkeys' decision-making skills even while they were still dealing with the effects of the cocaine. Moreover, when the experiment was repeated under normal conditions, the monkeys' performance improved beyond the 75% proficiency level shown earlier. In other words, a kind of cognitive enhancement had taken place. Meanwhile, in June 2011, Second Sight received the CE mark of approval for its Argus II artificial eye, meaning that for £53,000 you can now buy the system. It is the first bionic eye to be approved for sale anywhere in the world.
And Zheng Xiaoxiang of the Brain-Computer Interface Research Team at Zhejiang University in Zijingang, China, and colleagues announced that they had succeeded in capturing and deciphering signals from a monkey's brain and interpreting them into real-time robotic finger movements.
More successfully, BrainGate pioneer Professor John Donoghue said his team had implanted an aspirin-sized electrode array onto a patient's motor cortex – the part of the brain that controls movement. The technology is a neural interface system called BrainGate2, currently in clinical trials, which connects the patient Cathy's brain to a robot. The device is the result of over 10 years of research at Brown University and an extension of the first BrainGate in 2006, which allowed patients to control a computer cursor on a screen. Despite early attempts to control the human mind using psychosurgery to treat mental disorders, that early research did prove helpful in mapping out the human brain. But only recent experiments have shown success with implants for brain-machine interfacing. Even today, an artificial eye has limited resolution, and connections to the motor cortex can perform only slow manoeuvres.
Further research and the miniaturization of electronics should ensure more sophisticated bioelectrical connections, along with emerging technologies such as flexible circuitry that can operate within the body. Research at MIT has produced a fuel cell that could power small neural implants with the same source of energy as the brain itself: glucose. Engineers created a fuel cell that breaks down the ubiquitous sugar molecule much the same way as the body does, and it could enable a new generation of self-sustaining medical devices.
The project, led by MIT associate professor Rahul Sarpeshkar, was created with brain implants in mind, with the fuel cell tapping the glucose-rich cerebrospinal fluid that surrounds the brain and fills its cavities. And it's designed to allow electronics to be connected easily, as the fuel cell is itself embedded on a silicon chip that could readily be modified for different applications. The power it generates isn't much: up to 180 microwatts per square centimeter at maximum, with only a modest 3.4 microwatts available as a steady current. That's not nearly enough to power something like a laptop, but the team says that for a tiny implant that only needs to activate a few key cells, it should be sufficient. Sarpeshkar has written an entire book on ultra-low-power bioelectronics, so this is more than an educated guess. Boosted intelligence, better communication with artificial limbs, even a kind of bio-WiFi: these are among the wide-ranging claims of what the future of implants could bring. The future is no doubt one of higher-resolution connectivity between man and machine. But I sometimes think there will be a hidden cost, and that as we embrace technology we drift away from one another...

Saturday, 29 September 2012

Hospitals in space, or brief therapy getaways

A while ago I wrote an article about surviving the hazardous environment of space for long periods of time. But what about the benefits of zero gravity – could it be a therapeutic way to heal? I guess one obvious way is that microgravity allows freedom of movement, so that people with weak muscles or limb injuries could one day move about under their own effort.
Despite the fact that around 2% of skeletal bone mass is lost during a month of weightlessness, it could be an environment in which the disabled could be motivated to move with whatever muscles they have. To reduce muscle and bone loss, humans have to exercise for two or more hours every day. It's not just a matter of running on a treadmill or doing some sit-ups – odd-looking contraptions have been designed to make exercising in zero gravity effective.
The most common problem experienced by humans in the initial hours of weightlessness is known as space adaptation syndrome or SAS, commonly referred to as space sickness. Symptoms of SAS include nausea and vomiting, vertigo, headaches, lethargy, and overall malaise. Roughly 45% of all people who have flown in space have suffered from this condition. The duration of space sickness varies, but in no case has it lasted for more than 72 hours, after which the body adjusts to the new environment.
Astronauts subject to long periods of weightlessness wear pants with elastic bands attached between waistband and cuffs to compress the leg bones and reduce osteopenia. Other significant effects include fluid redistribution (causing the "moon-face" appearance typical of pictures of astronauts in weightlessness), a slowing of the cardiovascular system, decreased production of red blood cells, balance disorders, and a weakening of the immune system. Lesser symptoms include loss of body mass, nasal congestion, sleep disturbance, excess flatulence, and puffiness of the face. These effects begin to reverse quickly upon return to the Earth. In addition, after long space flight missions, male astronauts may experience severe eyesight problems. Such eyesight problems may be a major concern for future deep space flight missions, including a manned mission to the planet Mars.
Liquids and gases have a tendency to act differently in the microgravity environment of the International Space Station, regardless of their density or mass. The Binary Colloidal Alloy Test-6 - Phase Separation, or BCAT-6, investigation looks at how liquids and gases separate and come together in microgravity. This helps researchers to understand fundamental questions about what happens when gases and liquids separate from each other, along with the resulting patterns in the way solids are suspended in liquids. The results of this physics investigation may be used in the creation of better formulas and stabilizers to extend shelf life for products, foods, and medicines, and in advances in propellant research for future rocket engines.
Also Astrogenetix has entered into a Space Act Agreement (SAA) with NASA that commits to providing the critical resources needed to continue utilizing the International Space Station (ISS) and to further the development of important on-orbit microgravity vaccines and therapeutic drug experiments.
Astrogenetix entered into a similar SAA in 2009 resulting in 12 successful missions on the Space Shuttle that led to the discovery of potential vaccine targets for both salmonella and MRSA. This experience clearly identified that the most important part of the discovery process is the repeated frequency of access to microgravity. The new SAA reflects this important priority and NASA has committed to provide a minimum of 28 missions between 2013 and 2016.
Hygiene issues arise when dealing with low gravity environments. On the International Space Station, there are no showers, and astronauts instead take short sponge baths, with one cloth used to wash, and another used to rinse. Since surface tension causes water and soap bubbles to adhere to the skin, very little water is needed. Special non-rinsing soap is used, as well as special non-rinsing shampoos. Since a flush toilet would not work in low gravity environments, a special toilet was designed, that has suction capability.

Meanwhile, researchers from SRI International conducted the first-ever robotic surgery demonstration in a simulated zero-gravity environment. SRI is collaborating with researchers and surgeons from the University of Cincinnati to evaluate the benefits of robotic (tele-operated) surgery on air and space flights. The extreme-environment experiments were performed aboard a NASA C-9 aircraft, in the simulated microgravity of space.
Blood and bodily fluids cannot be contained in zero gravity, which means there is currently no way to perform surgery in space without contaminating the cabin. This makes an extended stay problematic, says James Antaki at Carnegie Mellon University in Pittsburgh, Pennsylvania.
Antaki is part of a team of US researchers developing an astro-surgical tool that could help. The Aqueous Immersion Surgical System, or AISS, is a transparent box that creates a watertight seal when it is placed over a wound and pumped full of sterile saline solution, says George Pantalos at the University of Louisville in Kentucky.
The saline solution is held under pressure inside the AISS to prevent blood from seeping out of the wound. Airtight holes allow surgeons to access the submerged wound using handheld and orthoscopic instruments. By varying the pressure within the AISS, the device could also be used to siphon up and recycle blood.
"You won't have a blood bank in space, so if there is bleeding you want to save as much blood as you can," says James Burgess, also at Carnegie Mellon, who came up with the concept. Researchers will put the system to the test aboard NASA's zero-gravity C-9 aircraft next week in the first of several experiments planned. They will perform surgery on an artificial coronary system filled with synthetic blood to test its ability to keep blood inside the body and out of the surgeon's field of view. Other experiments are likely to include a sub-orbital flight test, says Pantalos.
In the event of a medical emergency on board the space station, the only current option is to evacuate the astronaut back to Earth. This is not only dangerous for the patient but it is also extremely expensive, says Haidegger. Fortunately, however, no such emergency has yet occurred.
My own conclusion is that, if such concepts of living in space for recreational purposes come to pass, medical facilities will be needed for the potential hotels. Considering that space tourism might draw in a lot of people, health and safety requirements will need to be met – and accidents will happen! Perhaps it's not practical to perform surgery in space, due to free-floating body fluids that could short-circuit equipment. But as a form of therapy it could be quite beneficial: someone with an injured leg might heal more quickly on a zero-gravity treadmill while restoring the muscles. Hotels would also equalize the mobility of anyone who chose to stay there, whether disabled or fully able. With advances in zero-gravity research it is also likely that medicines will be manufactured in space. Research into the effects of microgravity on health is continuing and will probably help in the run-up to a proposed mission to Mars – by then, technology might have a solution for all our medical needs...

Friday, 28 September 2012

Know your security detectors, including the hidden ones

The technology that goes into detecting people armed with weapons or carrying drugs in modern airports is getting more sophisticated. Airport security uses a host of tools, from handheld wands and step-through metal detectors to backscatter x-ray machines – not to mention possible hidden ones. Metal detectors use one of three technologies: very low frequency (VLF), pulse induction (PI) and beat-frequency oscillation (BFO).
If a pulse induction metal detector is over a metal object, the pulse creates an opposite magnetic field in the object. When the pulse's magnetic field collapses, causing the reflected pulse, the magnetic field of the object makes the reflected pulse take longer to completely disappear. The process works something like an echo: if you yell in a room with only a few hard surfaces, you probably hear only a very brief echo. This type of metal detector is normally found in step-through detectors.
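That lengthened echo decay can be illustrated with a toy calculation. The time constants and the detection margin below are invented for illustration; a real detector samples the whole decay curve rather than a single threshold crossing:

```python
import math

def decay_time_us(tau_us, threshold=0.01):
    """Microseconds for a reflected pulse (normalised amplitude 1.0)
    to decay below `threshold`, assuming simple exponential decay."""
    return -tau_us * math.log(threshold)

def metal_detected(measured_tau_us, baseline_tau_us=10.0, margin=1.2):
    """Flag metal when the echo dies away noticeably more slowly than
    the no-target baseline -- the lengthened decay described above."""
    return decay_time_us(measured_tau_us) > margin * decay_time_us(baseline_tau_us)
```

With these assumed numbers, an echo whose decay constant stretches from 10 to 15 microseconds trips the detector, while the baseline does not.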
Security-sensitive areas have begun using a device that scans the whole body for contraband. The scan produces an image of the skin and will show objects such as guns, drugs and other weapons under clothing. The procedure is often used as an alternative to a whole-body strip search. The device uses a low-energy x-ray or gamma source to produce the image. X-rays come in many flavours, from hard to soft, with hard being the most penetrating. All x-rays are ionizing radiation: the x-ray photons break DNA and chemical bonds by dislodging electrons, and generate destructive free radicals. Backscatter x-ray uses the Compton effect to reflect, or scatter, x-rays back to detectors.
Backscatter x-ray systems use a complex mechanical assembly to sweep an intense pencil beam of x-rays from head to toe, leaving a trail of damaged DNA and cells along the way. Most x-rays are absorbed by body tissue; a small fraction is back-scattered, or reflected as if from a window, back to special detectors. The twelve-year-old fuzzy public images below reveal a great deal. If you inspect the lower legs and arms of the man shown, you will see "bone shadows". This is a result of the x-rays penetrating the skin and being absorbed by the underlying bones; the same shadow is seen on the top of the head, showing the skull. Fewer x-rays are back-scattered from the bone, making this region dark in the image. Clearly the x-rays penetrate the skin and reach bones and deeper radio-sensitive tissues such as the ovaries, testicles, thyroid, breasts and eyes.

To put the radiation dose received into perspective:
Naturally occurring ionizing radiation is all around us. We are continuously exposed to this background radiation during ordinary living. In 42 minutes of ordinary living, a person receives more radiation from naturally occurring sources than from screening with any general-use x-ray security system.
The national radiation safety standard (see below) sets a dose-per-screening limit for the general-use category. To meet the requirements of the general-use category, a full-body x-ray security system must deliver less than the dose a person receives during four minutes of airline flight. TSA has set its dose limit to ensure a person receives less radiation from one scan with a TSA general-use x-ray security system than from two minutes of airline flight. A person would have to be screened more than a thousand times in one year to exceed the annual radiation dose limit for security screening of people that has been set by expert radiation-safety organizations.
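A back-of-envelope check of those claims is possible, using two assumed figures: a cosmic-ray dose rate at cruising altitude of roughly 0.04 microsieverts per minute, and an annual screening limit of 250 microsieverts (both are commonly quoted values, not taken from this article):

```python
FLIGHT_USV_PER_MIN = 0.04   # assumed dose rate at cruising altitude, uSv/min
ANNUAL_LIMIT_USV = 250.0    # assumed annual limit for screened individuals

# General-use cap: less than the dose from four minutes of flight.
per_scan_cap_usv = 4 * FLIGHT_USV_PER_MIN
# TSA target: less than the dose from two minutes of flight.
tsa_scan_usv = 2 * FLIGHT_USV_PER_MIN

# Screenings per year needed to reach the annual limit at the TSA dose:
scans_to_limit = ANNUAL_LIMIT_USV / tsa_scan_usv
```

At these assumed rates, `scans_to_limit` comes out at roughly three thousand scans a year, comfortably consistent with the article's "more than a thousand times".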

Now for emerging THz technological applications. THz waves are found between microwaves and infrared on the electromagnetic spectrum. This type of radiation was chosen for security devices because it can penetrate matter such as clothing, wood, paper and other porous, non-conducting material. The terahertz region is well suited to scanning: even infrared cameras can see through some obstacles, and the effect is more pronounced still with lower-frequency terahertz radiation.
This type of radiation seems less threatening because it doesn’t penetrate deeply into the body and is believed to be harmless to both people and animals. THz waves may have applications beyond security devices. Research has been done to determine the feasibility of using the radiation to detect tumors underneath the skin and for analyzing the chemical properties of various materials and compounds. The potential marketplace for THz driven technological applications may generate many billions of dollars in revenue.
For the past several years, possible health risks from cumulative exposure to THz waves were mostly dismissed. Experts pointed out that THz photons are not strong enough to ionize atoms or molecules, nor to break chemical bonds. They assert – and it is true – that while higher-energy photons like ultraviolet rays and x-rays are harmful, lower-energy ones like terahertz waves are basically harmless. Yet there is other biophysics at work: some studies have suggested that THz radiation can cause significant genetic harm, while other, similar studies have shown no evidence of deleterious effects.
A company called Genia Photonics has developed a programmable picosecond laser that is capable of spotting trace amounts of a variety of substances. Genia claims that the system can detect explosives, chemical agents, and hazardous biological substances at up to 50 meters. This is why the US Department of Homeland Security is so keen on getting it into airports.
This device relies on classic spectroscopy, just a very advanced form of it. A spectrometer is simply a device that uses radiated energy to characterize a material. In the case of Genia's scanner, it is using far-infrared radiation in the terahertz band. It's important to understand that this is not an imaging device but a tool for reading absorbance spectra – in effect, a very sensitive electronic nose. Some of the radiation emitted will be absorbed by everything (and everyone) it is directed at. Because different compounds absorb that energy in different ways, the profile of the energy returned to the scanner can tell you what it "saw."
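The matching step can be sketched as comparing a measured absorbance profile against a library of reference spectra. The four-band profiles below are invented for illustration; a real instrument compares hundreds of spectral points against measured libraries:

```python
import math

# Toy reference library of absorbance profiles (values are invented).
REFERENCE = {
    "water":     [0.9, 0.7, 0.2, 0.1],
    "explosive": [0.1, 0.3, 0.8, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two absorbance profiles."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(measured):
    """Best-matching reference compound for a measured profile."""
    return max(REFERENCE, key=lambda name: cosine(measured, REFERENCE[name]))
```

A noisy measurement close to the "explosive" profile is still identified correctly, which is the essence of how an absorbance-reading "electronic nose" works.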
This kind of picosecond laser reads the environment in real time. Gunpowder residue on your hand from hunting the other day, cannabis smoke particles in your hair, or even a bit of (explosive-boosting) nitrate fertilizer stuck to your shoe could trigger this scanner. Perhaps this invasive technology might replace the pat-down option, or even the x-ray and terahertz body scanners. In the future it could be a key device for detecting improvised explosive devices, or for checking suspicious individuals for gunpowder or drug traces. Perhaps another gadget for law enforcement in the not-so-distant future...

Thursday, 27 September 2012

Passive Radar, the modern choice for territory monitoring

In times of conflict my mind dwells on military technology and how it might play its part in the east. The current dispute over island territory has seen the use of drones and radar systems, which could in itself be seen as an act of aggression. The traditional form of radar, which uses a transmitter and receiver station, can easily be identified and also jammed. If electromagnetic waves travelling through one material meet another material with a very different dielectric or diamagnetic constant, the waves will reflect or scatter from the boundary between the materials.
This means that a solid object in air or in a vacuum, or a significant change in atomic density between the object and what is surrounding it, will usually scatter radar (radio) waves from its surface. This is particularly true for electrically conductive materials such as metal and carbon fiber, making radar well-suited to the detection of aircraft and ships. Radar absorbing material, containing resistive and sometimes magnetic substances, is used on military vehicles to reduce radar reflection. This is the radio equivalent of painting something a dark color so that it cannot be seen by the eye at night.
Radar waves scatter in a variety of ways depending on the size (wavelength) of the radio wave and the shape of the target. If the wavelength is much shorter than the target's size, the wave will bounce off in a way similar to the way light is reflected by a mirror. If the wavelength is much longer than the size of the target, the target may not be visible because of poor reflection.
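That rule of thumb can be put roughly into code. The factor-of-ten boundaries between the scattering regimes below are an illustrative assumption, not a physical constant:

```python
C = 299_792_458.0  # speed of light, m/s

def wavelength_m(freq_hz):
    """Radio wavelength for a given frequency."""
    return C / freq_hz

def scattering_regime(freq_hz, target_size_m):
    """Crude classification of how a target of the given size scatters
    a radar wave, based on wavelength versus target size."""
    lam = wavelength_m(freq_hz)
    if lam < target_size_m / 10:
        return "optical (mirror-like reflection)"
    if lam > target_size_m * 10:
        return "Rayleigh (poor reflection)"
    return "resonance"
```

A 10 m aircraft illuminated at 10 GHz (3 cm wavelength) reflects mirror-like, while the same aircraft at 1 MHz (300 m wavelength) barely reflects at all.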
In a passive radar system, there is no dedicated transmitter. Instead, the receiver uses third-party transmitters in the environment, and measures the time difference of arrival between the signal arriving directly from the transmitter and the signal arriving via reflection from the object.
This allows the bistatic range of the object to be determined. In addition to bistatic range, a passive radar will typically also measure the bistatic Doppler shift of the echo and also its direction of arrival. These allow the location, heading and speed of the object to be calculated. In some cases, multiple transmitters and/or receivers can be employed to make several independent measurements of bistatic range, Doppler and bearing and hence significantly improve the final track accuracy.
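The bistatic range calculation follows directly from the measured delay: the time difference between the direct and reflected signals gives the extra path length, so the total transmitter-target-receiver path is that extra length plus the baseline. A minimal sketch (the numbers in the example are arbitrary):

```python
C = 299_792_458.0  # speed of light, m/s

def bistatic_path_m(delta_t_s, baseline_m):
    """Total transmitter->target->receiver path length implied by the
    measured time difference of arrival. The target lies somewhere on
    an ellipse with the transmitter and receiver at its foci."""
    return C * delta_t_s + baseline_m

# A 100 microsecond delay with a 50 km transmitter-receiver baseline
# gives a total path of roughly 80 km.
path = bistatic_path_m(100e-6, 50_000.0)
```

A single measurement only constrains the target to that ellipse; combining Doppler, bearing, or multiple transmitters, as described above, pins down the actual position.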

The rise of cheap computing power and digital receiver technology in the 1980s led to a resurgence of interest in passive radar technology. For the first time, these allowed designers to apply digital signal processing techniques to exploit a variety of broadcast signals and to use cross-correlation techniques to achieve sufficient signal processing gain to detect targets and estimate their bistatic range and Doppler shift. Classified programmes existed in several nations, but the first announcement of a commercial system was by Lockheed-Martin Mission Systems in 1998, with the commercial launch of the Silent Sentry system, that exploited FM radio and analogue television transmitters.
In a conventional radar system, the time of transmission of the pulse and the transmitted waveform are exactly known. This allows the object range to be easily calculated, and allows a matched filter to be used to achieve an optimal signal-to-noise ratio in the receiver. A passive radar does not have this information directly and hence must use a dedicated receiver channel (known as the "reference channel") to monitor each transmitter being exploited and dynamically sample the transmitted waveform. A passive radar typically employs the following processing steps: reception of the direct signal on the reference channel and of target echoes on one or more surveillance channels; digitization and conditioning of the reference signal; removal of the direct signal and clutter from the surveillance channels; cross-correlation of the reference and surveillance channels to estimate bistatic range and Doppler; and finally detection, tracking and track association.
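The cross-correlation step can be sketched as sliding the reference waveform across the surveillance channel and picking the lag with the strongest match; that lag is the echo's extra delay in samples. (Real systems search range and Doppler simultaneously, usually via FFTs; this brute-force version is only to show the idea.)

```python
def cross_correlate(surveillance, reference):
    """Return (best_lag, score): the lag, in samples, at which the
    reference waveform best matches the surveillance channel."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(surveillance) - len(reference) + 1):
        window = surveillance[lag:lag + len(reference)]
        score = sum(s * r for s, r in zip(window, reference))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag, best_score

# A weak echo of the reference, delayed by 3 samples:
ref = [1.0, -1.0, 1.0, 1.0]
surv = [0.0, 0.0, 0.0] + [0.5 * x for x in ref] + [0.0, 0.0]
```

The correlation peak at lag 3 recovers the delay even though the echo is at half the reference amplitude; the processing gain from integrating over many samples is what lets passive radar dig faint targets out of broadcast signals.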
Detection range can be determined using the standard radar equation, provided proper account is taken of the processing gain and external noise limitations. Furthermore, unlike conventional radar, detection range is also a function of the deployment geometry, as the distance of the receiver from the transmitter determines the level of external noise against which the targets must be detected. However, as a rule of thumb, it is reasonable to expect a passive radar using FM radio stations to achieve detection ranges of up to 150 km, high-power analogue TV and US HDTV stations to achieve detection ranges of over 300 km, and lower-power digital signals (such as cell phone and DAB or DVB-T) to achieve detection ranges of a few tens of kilometers. Other systems include Lockheed-Martin's Silent Sentry, exploiting FM radio stations; BAE Systems' CELLDAR, exploiting GSM base stations; Thales Air Systems' Homeland Alerter, an FM-radio-based system; and the Cassidian multiband passive radar.
The Cassidian multiband system appears to be superior; it is even claimed to detect stealth aircraft. Across multiple bands, no aircraft can absorb the whole spectrum of electromagnetic frequencies. Once the mobile receiving station picks up and digitizes incoming signals, powerful processors can distinguish electromagnetic noise from any faint aircraft signal over a wide spectrum.
During a sensitive territorial dispute between two nations, adding defensive systems would only escalate a delicate situation. Despite the promise of the meetings between Japan and China, it is still too early to conclude that the crisis has passed.

Wednesday, 26 September 2012

Optical Fabrication, a brief description

Stereolithography is an additive manufacturing process which employs a vat of liquid ultraviolet curable photopolymer "resin" and an ultraviolet laser to build parts' layers one at a time. For each layer, the laser beam traces a cross-section of the part pattern on the surface of the liquid resin. Exposure to the ultraviolet laser light cures and solidifies the pattern traced on the resin and joins it to the layer below. After the pattern has been traced, the SLA's elevator platform descends by a distance equal to the thickness of a single layer, typically 0.05 mm to 0.15 mm (0.002" to 0.006").
Then, a resin-filled blade sweeps across the cross section of the part, re-coating it with fresh material. On this new liquid surface, the subsequent layer pattern is traced, joining the previous layer. A complete 3-D part is formed by this process. After being built, parts are immersed in a chemical bath in order to be cleaned of excess resin and are subsequently cured in an ultraviolet oven. Stereolithography requires the use of supporting structures which serve to attach the part to the elevator platform, prevent deflection due to gravity and hold the cross sections in place so that they resist lateral pressure from the re-coater blade. Supports are generated automatically during the preparation of 3D Computer Aided Design models for use on the stereolithography machine, although they may be manipulated manually. Supports must be removed from the finished product manually, unlike in other, less costly, rapid prototyping technologies.
One of the advantages of stereolithography is its speed: functional parts can be manufactured within a day. The length of time it takes to produce a particular part depends on the size and complexity of the project and can range from a few hours to more than a day. Most stereolithography machines can produce parts with a maximum size of approximately 50×50×60 cm (20"×20"×24"), and some, such as the Mammoth stereolithography machine (which has a build platform of 210×70×80 cm), are capable of producing single parts of more than 2 m in length. Prototypes made by stereolithography are strong enough to be machined and can be used as master patterns for injection molding, thermoforming, blow molding, and various metal casting processes. Although stereolithography can produce a wide variety of shapes, it is often expensive; the cost of photo-curable resin ranges from $80 to $210 per liter, and the cost of stereolithography machines ranges from $100,000 to more than $500,000. However, recent interest and technological advances are rapidly bringing down the price of stereolithography machines.
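Those build-time figures follow from simple layer arithmetic. The layer thickness comes from the range quoted above; the 30-second per-layer time (laser trace plus recoat) is an assumed round number for illustration only:

```python
def layer_count(part_height_mm, layer_mm=0.1):
    """Number of layers for a part, at a mid-range 0.1 mm layer thickness."""
    return int(round(part_height_mm / layer_mm))

def build_hours(part_height_mm, layer_mm=0.1, secs_per_layer=30.0):
    """Very rough build-time estimate: layers x assumed per-layer time."""
    return layer_count(part_height_mm, layer_mm) * secs_per_layer / 3600.0

# A 100 mm tall part: 1000 layers, a little over 8 hours -- consistent
# with "a few hours to more than a day" for larger or finer-layered parts.
hours = build_hours(100.0)
```

Halving the layer thickness doubles the layer count and roughly doubles the build time, which is the basic resolution-versus-speed trade-off in the process.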

The term "stereolithography" was coined in 1986 by Charles (Chuck) W. Hull, who patented it as a method and apparatus for making solid objects by successively "printing" thin layers of an ultraviolet-curable material one on top of the other. Hull's patent described a concentrated beam of ultraviolet light focused onto the surface of a vat filled with liquid photopolymer. The light beam draws the object onto the surface of the liquid layer by layer, using polymerization or cross-linking to create a solid – a complex process which requires automation. In 1986, Hull founded the first company to generalize and commercialize the procedure, 3D Systems Inc., currently based in Rock Hill, SC. More recently, attempts have been made to construct mathematical models of the stereolithography process and to design algorithms to determine whether a proposed object can be constructed by the process.
On September 26, 2012, Formlabs (a company formed by former members of the MIT Media Lab) released the Form 1 3D printer, currently the cheapest 3D printer on the market using the stereolithography process. The printer has a build envelope of 125 x 125 x 165 mm and a resolution of 300 microns. Formlabs has also developed software that takes STL models and lets users generate "smart" support structures, allowing for overhangs and more complex geometries. The kit is currently being sold through Kickstarter for $2,299. Depending on the success of the Kickstarter venture, the Form 1 will be in competition with fused deposition modelling (FDM) printers such as RepRap and MakerBot.

In comparison with FDM, optical fabrication offers smoother surfaces and finer printing resolution within its print layers. The main disadvantage is that the print material is not interchangeable. While RepRap printers use ABS or polylactic acid (PLA) filament with high stiffness properties, the Form 1 printer uses a liquid ultraviolet-curable photopolymer "resin". With photo-curable resin costing from $80 to $210 per liter, the main question will be cost versus quality. In future there might be two systems leading the manufacturing revolution of 3D printing, but for now the well-established fused deposition model is winning, because it's simply cheaper.
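The cost-versus-quality question can be made concrete with a back-of-the-envelope material cost per part, using the resin price range quoted above ($80 to $210 per liter); the part volume is an arbitrary example.

```python
# Material cost of one SLA part, ignoring supports and waste.
# Resin prices are the $80-$210/liter range quoted in the post.

def sla_material_cost(part_volume_cm3, resin_price_per_litre):
    # 1 litre = 1000 cm^3
    return part_volume_cm3 / 1000.0 * resin_price_per_litre

# A 50 cm^3 part:
print(sla_material_cost(50, 80))   # cheapest resin: $4.0
print(sla_material_cost(50, 210))  # priciest resin: $10.5
```

Per-part material cost is modest either way; the real barrier remains the price of the machine itself.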


Tuesday, 25 September 2012

Electric cars, good news and bad news

Electric cars today are at a tipping point, almost at the stage where manufacturers are wondering whether they should introduce an electric alternative or carry on as before, knowing full well that petrol prices are steadily rising. Battery technology and other costs over the entire lifespan of an electric car need to be weighed against those of a petrol car. It is also worth looking at the latest news on the current state of electric cars. One of the forerunners, Tesla Motors, brings good news.
In Hawthorne, Calif., Tesla Motors introduced its Supercharger, a glittering monolith capable of bringing the battery of a Model S sedan from flat to full in about an hour.
Elon Musk, chief executive of Tesla, introduced the 480-volt Supercharger. Mr. Musk said the chargers would dispense free electricity generated without emissions through a partnership with SolarCity, a builder and installer of photovoltaic equipment led by Peter and Lyndon Rive, cousins of Mr. Musk. The Tesla executive is also SolarCity’s chairman.
The Superchargers will be installed at solar carports loosely resembling filling stations, and are capable of charging several vehicles simultaneously, as well as returning surplus power to the grid. Khyati Shah, a spokeswoman for SolarCity, wrote in an e-mail that two of the six Superchargers already installed had solar capability, with the others running off grid power. One solar unit is 24 kilowatts and the other is 26.
Mr. Musk said the Supercharger network would address some anxieties that might be inhibiting wide consumer adoption of electric vehicles, including concern about power-plant emissions related to charging; the cars’ inability to travel long distances; and operational costs. The Supercharger will charge at 100 kilowatts and eventually up to 120 kilowatts, he said. “What it means is that you can drive for three hours, stop for less than half an hour, recharge, and be ready to go again,” Mr. Musk said. A Model S would reach a state of half-charge in 30 minutes.
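The quoted figures can be sanity-checked with a simple power-times-time calculation. Assuming a constant 100 kW charge rate (a simplification, since real charge rates taper as the battery fills), a half-charge of the 85 kWh pack comes out close to Tesla's 30-minute claim.

```python
# Sanity check on the Supercharger figures quoted above.
# Constant-power charging is assumed; real charging tapers.

def charge_time_minutes(energy_kwh, power_kw):
    return energy_kwh / power_kw * 60.0

half_pack = 85 / 2  # half of the 85 kWh Model S pack = 42.5 kWh
print(round(charge_time_minutes(half_pack, 100), 1))  # 25.5 minutes
```

25.5 minutes at an idealised constant 100 kW is consistent with the stated "half-charge in 30 minutes" once taper and losses are allowed for.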
The system is not compatible with existing Level III fast chargers. It complements elements of the company’s charging system unveiled earlier, including the high-power wall unit and plug design the company demonstrated for Wheels last year.
Tesla has six Superchargers in operation, all in California, with more to come in the state by the end of the year. The first stations are expected to be opened to the public in coming weeks.
Mr. Musk said the company intended to have Superchargers installed across much of the United States in the next two years and to have the entire country, and the lower part of Canada, covered in four or five years.
The ability to connect to the Supercharger will be standard on Model S cars with the 85-kilowatt-hour battery, the highest-capacity battery marketed by Tesla, and would be optional for buyers of the sedan fitted with the 60-kilowatt-hour pack. That said, Model S sedans equipped with the 40-kilowatt-hour batteries, and the existing fleet of Tesla Roadsters, will be excluded from using the Supercharger.

The bad news comes from Toyota, which had already taken a more conservative view of the market for battery-powered cars than rivals General Motors Co and Nissan Motor Co. The company said it would sell only about 100 battery-powered eQ vehicles in the United States and Japan in an extremely limited release.
By dropping plans for a second electric vehicle in its line-up, Toyota cast more doubt on an alternative to the combustion engine that has been both lauded for its oil-saving potential and criticized for its heavy reliance on government subsidies in key markets like the United States.
"The current capabilities of electric vehicles do not meet society's needs, whether it may be the distance the cars can run, or the costs, or how it takes a long time to charge," said Takeshi Uchiyamada, who spearheaded Toyota's development of the Prius hybrid in the 1990s.
Toyota said it was putting its emphasis on that technology, an area in which it is the established leader. Toyota said on Monday it expected to have 21 hybrid gas-electric models like the Prius in its line-up by 2015. Of that total, 14 of the new hybrids will be all-new, the automaker said.
Pure electric vehicles, like the Nissan Leaf, carry only lithium-ion batteries. Consumer demand for the vehicles has been capped by their limited range and the relatively high cost of the powerful batteries they require. The decision to drop plans for more extensive rollout of its eQ city car leaves Toyota with just a single pure EV in its line-up. The automaker will launch an all-electric RAV4 model in the United States that was jointly developed with Tesla Motors. Toyota expects to sell 2,600 of the electric-powered sports utility vehicle over the next three years. By comparison, Toyota sold almost 37,000 Camry sedans in August alone in the United States, the automaker's largest market.
Toyota is also far from its plug-in hybrid sales target. The automaker planned to sell between 35,000 and 40,000 Prius plug-in hybrids in 2012 in Japan. So far it has sold only 8,400, or about 20 percent of its target.
A broad industry consensus sees plug-in cars accounting for only a single-digit percentage of total global sales over the next decade. Nissan remains more bullish, forecasting that by 2020 one-tenth of all cars sold will be electric. Globally, Nissan has sold about 38,000 Leaf electric cars since the vehicle's launch at the end of 2010.
On the horizon, BMW's Megacity Vehicle is finally taking shape, well before its 2013 arrival. Until now, BMW's Project i has merely been vague talk about future mobility concepts and electric vehicles for major urban areas.
The Ford Focus Electric has a 6.6 kW on-board charger, which replenishes a depleted battery at almost double the speed of the 3.3 kW charger used on the Nissan Leaf, which takes seven hours to charge. The Focus Electric is also expected sometime next year, although its price tag is expected to be in the region of £25,000.
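The charge times above follow from simple energy-over-power arithmetic. The 24 kWh figure is the Nissan Leaf's published pack capacity; constant-rate charging is assumed as a simplification.

```python
# Charge-time check: pack energy divided by charger power.
# 24 kWh is the Leaf's pack; constant-rate charging is assumed.

def hours_to_charge(pack_kwh, charger_kw):
    return pack_kwh / charger_kw

print(round(hours_to_charge(24, 3.3), 1))  # 7.3 h, matching the Leaf figure
print(round(hours_to_charge(24, 6.6), 1))  # 3.6 h at double the power
```

Doubling the charger power halves the charge time, which is exactly the advantage the 6.6 kW unit gives the Focus Electric.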
New electric cars arriving next year will add to a changing market. I am unsure whether the Supercharger refuelling infrastructure in the US will use a universal connection that allows other cars to charge. I am hoping that car manufacturers like BMW, Nissan and Ford can find success in the electric car business, and that one day it will trickle down to the average consumer.


Monday, 24 September 2012

Eye Tracking, a brief history and potential future...

Eye tracking is the process of measuring either the point of gaze (where one is looking) or the motion of an eye relative to the head. An eye tracker is a device for measuring eye positions and eye movement. Eye trackers are used in research on the visual system, in psychology, in cognitive linguistics and in product design. There are a number of methods for measuring eye movement. The most popular variant uses video images from which the eye position is extracted. Other methods use search coils or are based on the electrooculogram.
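The video-based variant can be illustrated with a toy example: the pupil is usually the darkest region of an eye image, so a crude tracker can threshold the frame and take the centroid of the dark pixels. Real trackers add corneal reflections and sub-pixel fitting; this is only a sketch, with the threshold chosen arbitrarily.

```python
# Toy pupil detection: centroid of pixels darker than a threshold.
# frame is a 2D list of grayscale values (0 = black, 255 = white).

def pupil_centroid(frame, threshold=50):
    xs, ys, n = 0.0, 0.0, 0
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if value < threshold:
                xs += x
                ys += y
                n += 1
    if n == 0:
        return None  # no dark region found
    return (xs / n, ys / n)

# Tiny synthetic "frame": bright background, dark 2x2 pupil.
frame = [[255] * 6 for _ in range(6)]
for y in (2, 3):
    for x in (3, 4):
        frame[y][x] = 10

print(pupil_centroid(frame))  # (3.5, 2.5)
```

Running this per video frame gives a stream of pupil positions; turning those into a point of gaze on a screen additionally requires a calibration step.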
In the 1950s, Alfred L. Yarbus did important eye tracking research and his 1967 book is often quoted. He showed the task given to a subject has a very large influence on the subject's eye movement. He also wrote about the relation between fixations and interest: "All the records ... show conclusively that the character of the eye movement is either completely independent of or only very slightly dependent on the material of the picture and how it was made, provided that it is flat or nearly flat." The cyclical pattern in the examination of pictures "is dependent not only on what is shown on the picture, but also on the problem facing the observer and the information that he hopes to gain from the picture." "Records of eye movements show that the observer's attention is usually held only by certain elements of the picture.... Eye movement reflects the human thought processes; so the observer's thought may be followed to some extent from records of eye movement (the thought accompanying the examination of the particular object).
It is easy to determine from these records which elements attract the observer's eye (and, consequently, his thought), in what order, and how often." In the 1970s, eye tracking research expanded rapidly, particularly reading research. A good overview of the research in this period is given by Rayner. In 1980, Just and Carpenter formulated the influential strong eye-mind hypothesis: that "there is no appreciable lag between what is fixated and what is processed". If this hypothesis is correct, then when a subject looks at a word or object, he or she also thinks about it (processes it cognitively) for exactly as long as the recorded fixation. The hypothesis is often taken for granted by beginning eye tracker researchers.
However, gaze-contingent techniques offer an interesting option in order to disentangle overt and covert attentions, to differentiate what is fixated and what is processed. The 1980s also saw the birth of using eye tracking to answer questions related to human-computer interaction. Specifically, researchers investigated how users search for commands in computer menus. Additionally, computers allowed researchers to use eye-tracking results in real time, primarily to help disabled users.
In the late 1990s, organizations including one of the world’s largest advertising and marketing agency networks EURO RSCG began using eye-tracking technology to measure and study reactions to information on the World Wide Web. For a large number of web designers up until this point, it was assumed that web design should be fashioned off of print and newspaper design. In 2006, British behavioral consultancy research firm Bunnyfoot researched in-game advertising using eye-tracking and physiological data. The study examined how effective advertising was in video games in virtual worlds with digital billboards.
From 2001 till present day, Tobii Technology has been developing eye-tracking technology that both allows disabled users to control devices using only their eyes, as well as helps designers understand how users view websites. Today, eye-tracking is widely used in the scientific community, marketing, and in usability studies. This technology has seemingly become far more popular in the past decade than any other time in history, and is heavily used in developing effective advertising campaigns and usable websites.
Eye-tracking equipment would normally have cost upwards of $8,000, beyond the reach of most people. Now, scientists in London have pioneered a device, similar to existing technology such as the Tobii PCEye and the EyeTech TM3, using components any of us can buy from the shopping mall. The breakthrough could help millions of people suffering from multiple sclerosis, Parkinson's or muscular dystrophy and, potentially, opens the door to a new era of hands-free computers, allowing us to use them without a mouse, keyboard or touch screen.
For the lead researcher, Dr Aldo Faisal, a neuroscientist at Imperial College in London, the new device only came about because of his obsession with disassembling gadgets. "I like to play with gadgets and was playing with a popular video-game console," he said. "I hacked it and discovered it was very fast and better than any webcam for movement. Actually, it was so fast that I found we could record eye movement with it." Tracking eye movement is no mean feat. Our eyes move 10 to 20 times a second, so a standard webcam or even a film camera will miss most eye movements and much of where we are looking. As such, it is perhaps no surprise that commercial eye-tracking devices are so expensive.
Luckily for Faisal and his team of researchers, video game console makers have been willing to bulk-buy the technology needed to make good enough cameras. They make a loss on the console cameras in the expectation of making it back on accompanying video game sales. "We originally created the device for £39.80 ($64) but recent falls in the price of video game console cameras mean we could now actually make the same device for about £20 ($32)," says Faisal. The eye-tracking device works by first establishing where the eyes are looking, through a relatively straightforward calibration process. The user puts on the glasses, with the two attached cameras, and stares at a computer screen full of dots. They are then told to look at different dots, with software developed by the researchers working out how the eye looks at each dot.
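The dot calibration described above can be sketched as a curve fit: the user fixates dots at known screen positions while the tracker records the pupil position for each, then the software fits a map from pupil coordinates to screen coordinates. Real calibrations typically fit a 2D polynomial per axis; an independent straight-line fit per axis, with synthetic numbers, is a simplifying assumption here.

```python
# Calibration sketch: fit screen position as a linear function of
# pupil position from a handful of known-dot fixations.

def fit_linear(xs, ys):
    """Least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    return a, mean_y - a * mean_x

# Pupil x-positions recorded while the user looked at dots at known
# screen x-coordinates (synthetic numbers for illustration).
pupil_x = [10.0, 20.0, 30.0, 40.0]
screen_x = [0.0, 320.0, 640.0, 960.0]

a, b = fit_linear(pupil_x, screen_x)
print(round(a * 25.0 + b))  # pupil at x=25 maps to screen x=480
```

Once the fit is stored, every subsequent pupil position from the cameras can be converted to an on-screen gaze point in real time.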
With simple software and basic mountable cameras, a Kinect-like hack could bring down prices; there is already an open-source project that lets people try eye tracking for themselves. Eventually market prices might come to reflect the modest budget of such an open-source project. The future of this technology will probably go beyond applications in research and computer interfacing. My suspicion is that mobile phone companies and the games industry will want better interfaces, and such a device could be the next game changer...

Construct your own low cost EYE tracker

Open source software for your Eye tracker