Thursday, October 29, 2009

HOW TO REMOVE THE LOG OFF BUTTON FROM THE START MENU?


First, click the 'Start' button and select 'Run'. In the resulting dialog box, type "regedit" and press Enter.

Go to [HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\] and create a new key named "Explorer" under "Policies".

In [HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\], right-click in the open area and create a new DWORD value named "StartMenuLogoff" inside the "Explorer" key.

Next, right-click on "StartMenuLogoff" and click 'Modify'.

Finally, enter the value "1" in the 'Value data' box and click 'OK'.

I won't take responsibility for any damage to your system, so back up the registry before you do this; if you face any difficulty afterwards, you can restore it.
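The same steps can be saved as a .reg file and merged by double-clicking it; this is a minimal sketch using the key path and value name from the steps above. Setting the DWORD back to 0 (or deleting the value) restores the Log Off button.

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"StartMenuLogoff"=dword:00000001
```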

ANOTHER WAY TO HIDE A FOLDER?

Hi. What everyone usually does is go to the folder's Properties, tick 'Hidden', then go to Folder Options and set it to hide hidden files; but if anyone changes Folder Options back, they can see the folder. Is there another way....???
Got confused?
Do you think there is another way?
What do you think?


Yes, there is another way to hide a folder.
I am going to show a step-by-step procedure to hide a folder, and explain it too.

STEP 1: CREATE A NEW FOLDER AND RENAME IT
Create a new folder, rename it, and erase whatever name it had before.
This is the main step: type the new name by holding Alt and typing 0160 on the numeric keypad. Don't release Alt until you have finished typing the whole number.
You will now see a folder with what looks like no name. Actually, it isn't nameless: its name is a space, because 0160 is the exact character code for a space.

STEP 2: GIVE IT A BLANK ICON
In this step, go to the folder's Properties, open the 'Customize' tab, and click 'Change Icon'.
Select one of the blank icons (with no image on it), click 'OK', then 'Apply' and 'OK'. Now when you refresh (press F5 repeatedly), the folder is almost invisible. Finally, open the folder's Properties again, tick 'Hidden', and finish with the steps you would generally use to hide a folder.
So this is how we can hide our folder, and it's all done.
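The trick works because Alt+0160 types the non-breaking space, a perfectly valid folder name that merely renders as blank. Here is a small Python sketch (my own illustration, not the Windows Explorer steps themselves) demonstrating the idea:

```python
import os
import tempfile

# Alt+0160 types the non-breaking space character (Unicode U+00A0).
hidden_name = "\u00a0"

# Create a folder whose entire name is that one invisible character.
base = tempfile.mkdtemp()
path = os.path.join(base, hidden_name)
os.mkdir(path)

# The folder exists, but its name displays as a blank.
print(os.path.isdir(path))   # True
```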


WHAT IS WINDOWS VIENNA?

http://fc09.deviantart.com/fs27/f/2008/071/3/4/Windows_Vienna_Concept_by_Kroanor.jpg

Windows "Vienna" (formerly known as Blackcomb) is Microsoft's codename for a future version of Microsoft Windows. It was originally announced in February 2000, but has since been subject to major delays and rescheduling.

The code name "Blackcomb" was originally assigned to a version of Windows that was planned to follow Windows XP (codenamed "Whistler"; both named after the Whistler-Blackcomb resort) in both client and server versions. However, in August 2001, the release of Blackcomb was pushed back several years and Vista (originally codenamed "Longhorn" after a bar in the Whistler Blackcomb Resort) was announced as a release between XP and Blackcomb.
Since then, the status of Blackcomb has undergone many alterations and PR manipulations, ranging from Blackcomb being scrapped entirely, to becoming a server-only release. As of 2006, it is still planned as both a client and server release with a current release estimate of anytime between 2009 and 2012, although no firm release date or target has yet been publicized.
WINDOWS VISTA
http://www.geekgirls.com/images/vista_desktop.jpg

WINDOWS SEVEN
http://www.istartedsomething.com/wp-content/uploads/2009/04/windows7rc_large.jpg


In January 2006, "Blackcomb" was renamed to "Vienna".
Originally, internal sources pitched Blackcomb as being not just a major revision of Windows, but a complete departure from the way users today typically think about interacting with a computer. While Windows Vista is intended to be a technologies-based release, with some added UI sparkle (in the form of the Windows Aero set of technologies and guidelines), Vienna is targeted directly at revolutionizing the way users of the product interact with their PCs.
For instance, the "Start" philosophy, introduced in Windows 95, may be completely replaced by the "new interface", which was said in 1999 to be scheduled for "Vienna" (before being moved to Vista ("Longhorn") and then back again to "Vienna").

The Explorer shell will be replaced in its entirety, with features such as the taskbar being replaced by a new concept based on the last 10 years of R&D at the Microsoft "VIBE" research lab. Projects such as GroupBar and LayoutBar are expected to make an appearance, allowing users to more effectively manage and keep track of their applications and documents while in use, and a new way of launching applications is expected—among other ideas, Microsoft is investigating a pie menu-type circular interface, similar in function to the dock in Mac OS X.

Several other features originally planned for Windows Vista may be part of "Vienna", though they may be released independently when they are finished.
"Vienna" will also feature the "sandboxed" approach discussed during the Alpha/White Box development phase for Longhorn. All non-managed code will run in a sandboxed environment where access to the "outside world" is restricted by the operating system. Access to raw sockets will be disabled from within the sandbox, as will direct access to the file system, hardware abstraction layer (HAL), and complete memory addressing. All access to outside applications, files, and protocols will be regulated by the operating system, and any malicious activity will be halted immediately. If this approach is successful, it bodes very well for security and safety, as it is virtually impossible for a malicious application to cause any damage to the system if it is locked in what is effectively a glass box.

Another interesting feature mentioned by Bill Gates is "a pervasive typing line that will recognize the sentence that [the user is] typing in." The implications of this could be as simple as a "complete as you type" function as found in most modern search engines (e.g. Google Suggest), or as complex as being able to give verbal commands to the PC without any concern for syntax. The former feature has been incorporated to an extent in Windows Vista.
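At its simplest, a "complete as you type" feature boils down to a prefix match over a vocabulary. This is a hypothetical sketch of the idea (the function name and word list are my own illustration, not anything from Microsoft):

```python
def complete(prefix, vocabulary):
    """Return every vocabulary entry that begins with the typed prefix."""
    return [word for word in vocabulary if word.startswith(prefix)]

# As the user types "win", the typing line could suggest:
suggestions = complete("win", ["windows vienna", "windows vista", "wordpad", "winamp"])
print(suggestions)  # ['windows vienna', 'windows vista', 'winamp']
```

A real implementation would use a trie or ranked index for speed, but the behavior the user sees is the same.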

Microsoft has stated that "Vienna" will be available in both 32-bit and 64-bit for the client version, in order to ease the industry's transition from 32-bit to 64-bit computing. Vienna Server is expected to support only 64-bit server systems. There will be continued backward compatibility with 32-bit applications, but 16-bit Windows and MS-DOS applications will not be supported as in Windows Vista 64-bit versions. They are already unsupported in 64-bit versions of XP and Server 2003...

If you have any doubts, please post comments so that I can learn more.

I think this information will help you clearly see the differences between the various Windows operating systems.

Friday, October 16, 2009

Laptop definition:

Laptop definition: Also called notebooks, laptops are portable computers with integrated monitors, powered by rechargeable batteries. You will say, "But that's not all! They are lightweight, small, and you have to close the lid to carry them." Sorry to disappoint you, but as you will see in a few moments, laptops weren't always like that; small dimensions and light weight were not always the standard.

Laptop Computer Parts




Inside, most laptop computers integrate a motherboard, memory chips, a graphics chip, a hard disk, a sound chip, an optical drive, the processor with its heat sink and cooling fan, a network chip, the battery, and an ExpressCard expansion bay for FireWire, external disk drives, SSD drives, wireless modules, TV tuners, additional memory, and card readers. All of this is encased in a metal-alloy shell that also houses the display in the lid, the keyboard, and the touchpad.
All laptop computer parts are developed in smaller and smaller dimensions as the manufacturing companies tend to design lighter weight models reduced in size.




Who Invented the Laptop

The history of the laptop takes us back to April 1976, when Xerox PARC developed the first portable computer prototype, the Xerox NoteTaker. It never reached mass production, but it was the inspiration for the first commercially available notebook. The Xerox NoteTaker was designed by Alan Kay, Adele Goldberg, Douglas Fairbairn, and Larry Tesler. It featured a built-in monochrome display, 128K of 8-bit memory, a 1 MHz processor, a floppy disk drive, a foldable keyboard and a mouse, and it ran the Smalltalk operating system. It weighed 22 kg, and production costs were incredibly high, reaching $50,000 at the time of introduction.

One of the most important steps in the history of the laptop was taken in 1981 by the Osborne Computer Corporation, which released the world's first portable computer onto the market at a price of $1,795. The Osborne 1 was named after its designer, Adam Osborne, who created the 24.5-pound machine. It was powered by a 4 MHz Zilog Z80 processor with 64K of RAM and featured a 5-inch display with 53 x 24 text resolution, an IEEE-488 port configurable as a parallel printer port, an RS-232-compatible 300/1200-baud serial port for use with external modems or serial printers, a modem, dual single-sided, single-density 5-1/4-inch 91K floppy disk drives for storage, and the CP/M operating system.




The first portable computer could be closed up and came with a carrying handle and an optional battery pack. It could display a maximum of 52 characters per line, but the user could scroll back and forth using the cursor keys to read lines of up to 128 characters.
It was also the world's first computer to come with a software bundle, priced at $1,500, which included a CP/M utility, the SuperCalc spreadsheet, WordStar word processing with MailMerge, the Microsoft MBASIC programming language, and the Digital Research CBASIC programming language.

(Sources: computerhistory ; About.com ; Ligon GT Magnet Middle School ; howstuffworks)



Laptop vs Desktop Computers




It’s clear that if you use a computer mostly in an office or at home, you should stick with a desktop PC, and get a notebook only if you have to use your applications while on the go. I am saying this because, as you already know, laptops are not as powerful as desktop PCs when we take into account processing speed and graphics capability. Portability, light weight, and the slim keyboard are the laptop's benefits, but when we discuss the disadvantages of laptops, the first problem we meet is upgradeability, which is far more limited than in the desktop's case. This is because you can't upgrade the entire notebook configuration, and if you replace just some parts, they may no longer be compatible with the existing ones.

http://www.laptopsarena.com/

Motherboards, keyboards and batteries are proprietary in design and only the original manufacturer can replace them. You can interchange the hard drive and the memory.
Other disadvantages include the high cost of components, poor ergonomics, ease of theft, and the integrated keyboard, onto which you could accidentally spill liquids, damaging the motherboard and the display. Replacing either of those two parts would cost you as much as a brand-new laptop.

In conclusion, laptops provide more convenience for mobile use but come with higher maintenance costs.

The Curious Case of John Titor - Time Traveller

Early one morning, a man who identified himself as John Titor posted a message on the forums of the Time Travel Institute, a website "dedicated to research and exploration of the temporal sciences." Titor said he had returned from the year 2036, and that he was a survivor of a civil war and nuclear attack. He had been sent back in time to retrieve an IBM 5100. That computer, a primordial desktop PC released in 1975, supposedly had some key to solving a future crisis.

Titor methodically listed the parts required for what he called a "gravity distortion system." His grandfather, he claimed, had worked on such a machine, and he was on his way back to 1975 to find him. Titor's messages continued for a few months; then he claimed he had to return to 2036 for good. That was weird. But then something even weirder happened: his online followers couldn't let Titor go. He is now the Sasquatch of Generation Net, a real-life version of a new kind of game - the alternate reality game, which sends surfers down reality-blurring rabbit holes.

Geeks began unearthing strange facts about the IBM 5100. Obsessives launched Titor sites, stitching together his postings. They created timelines, charts... they even held conventions. He was cited many times by Art Bell, the host of the paranormal radio show Coast to Coast. Ultimately, an Italian TV show hired a private eye to follow Titor's trail. The gumshoe ended up at the doorstep of a flashy entertainment lawyer living in the Disney utopia of Celebration, Florida. The lawyer claims to be merely "representing" Titor, but some online think that he or his teenage hacker son made up the whole thing. No matter. Titor has legs. There are now Titor books, websites, fan clubs, merchandise... even a stage play. The most compelling question of all isn't whether Titor exists. It's why a story so ludicrous would seize the imaginations of so many people online.


for more info look in
http://www.johntitor.com/

Augmented Reality in a Contact Lens

A new generation of contact lenses built with very small circuits and LEDs promises bionic eyesight
The human eye is a perceptual powerhouse. It can see millions of colors, adjust easily to shifting light conditions, and transmit information to the brain at a rate exceeding that of a high-speed Internet connection.



Image: Raygun Studio

But why stop there?

In the Terminator movies, Arnold Schwarzenegger’s character sees the world with data superimposed on his visual field—virtual captions that enhance the cyborg’s scan of a scene. In stories by the science fiction author Vernor Vinge, characters rely on electronic contact lenses, rather than smartphones or brain implants, for seamless access to information that appears right before their eyes.

These visions (if I may) might seem far-fetched, but a contact lens with simple built-in electronics is already within reach; in fact, my students and I are already producing such devices in small numbers in my laboratory at the University of Washington, in Seattle [see sidebar, "A Twinkle in the Eye"]. These lenses don’t give us the vision of an eagle or the benefit of running subtitles on our surroundings yet. But we have built a lens with one LED, which we’ve powered wirelessly with RF. What we’ve done so far barely hints at what will soon be possible with this technology.

Photos: University of Washington


Conventional contact lenses are polymers formed in specific shapes to correct faulty vision. To turn such a lens into a functional system, we integrate control circuits, communication circuits, and miniature antennas into the lens using custom-built optoelectronic components. Those components will eventually include hundreds of LEDs, which will form images in front of the eye, such as words, charts, and photographs. Much of the hardware is semitransparent so that wearers can navigate their surroundings without crashing into them or becoming disoriented. In all likelihood, a separate, portable device will relay displayable information to the lens’s control circuit, which will operate the optoelectronics in the lens.

These lenses don’t need to be very complex to be useful. Even a lens with a single pixel could aid people with impaired hearing or be incorporated as an indicator into computer games. With more colors and resolution, the repertoire could be expanded to include displaying text, translating speech into captions in real time, or offering visual cues from a navigation system. With basic image processing and Internet access, a contact-lens display could unlock whole new worlds of visual information, unfettered by the constraints of a physical display.

Besides visual enhancement, noninvasive monitoring of the wearer’s biomarkers and health indicators could be a huge future market. We’ve built several simple sensors that can detect the concentration of a molecule, such as glucose. Sensors built onto lenses would let diabetic wearers keep tabs on blood-sugar levels without needing to prick a finger. The glucose detectors we’re evaluating now are a mere glimmer of what will be possible in the next 5 to 10 years. Contact lenses are worn daily by more than a hundred million people, and they are one of the only disposable, mass-market products that remain in contact, through fluids, with the interior of the body for an extended period of time. When you get a blood test, your doctor is probably measuring many of the same biomarkers that are found in the live cells on the surface of your eye—and in concentrations that correlate closely with the levels in your bloodstream. An appropriately configured contact lens could monitor cholesterol, sodium, and potassium levels, to name a few potential targets. Coupled with a wireless data transmitter, the lens could relay information to medics or nurses instantly, without needles or laboratory chemistry, and with a much lower chance of mix-ups.

Three fundamental challenges stand in the way of building a multipurpose contact lens. First, the processes for making many of the lens’s parts and subsystems are incompatible with one another and with the fragile polymer of the lens. To get around this problem, my colleagues and I make all our devices from scratch. To fabricate the components for silicon circuits and LEDs, we use high temperatures and corrosive chemicals, which means we can’t manufacture them directly onto a lens. That leads to the second challenge, which is that all the key components of the lens need to be miniaturized and integrated onto about 1.5 square centimeters of a flexible, transparent polymer. We haven’t fully solved that problem yet, but we have so far developed our own specialized assembly process, which enables us to integrate several different kinds of components onto a lens. Last but not least, the whole contraption needs to be completely safe for the eye. Take an LED, for example. Most red LEDs are made of aluminum gallium arsenide, which is toxic. So before an LED can go into the eye, it must be enveloped in a biocompatible substance.

So far, besides our glucose monitor, we’ve been able to batch-fabricate a few other nanoscale biosensors that respond to a target molecule with an electrical signal; we’ve also made several microscale components, including single-crystal silicon transistors, radio chips, antennas, diffusion resistors, LEDs, and silicon photodetectors. We’ve constructed all the micrometer-scale metal interconnects necessary to form a circuit on a contact lens. We’ve also shown that these microcomponents can be integrated through a self-assembly process onto other unconventional substrates, such as thin, flexible transparent plastics or glass. We’ve fabricated prototype lenses with an LED, a small radio chip, and an antenna, and we’ve transmitted energy to the lens wirelessly, lighting the LED. To demonstrate that the lenses can be safe, we encapsulated them in a biocompatible polymer and successfully tested them in trials with live rabbits.




Photos: University of Washington

Second Sight:
In recent trials, rabbits wore lenses containing metal circuit structures for 20 minutes at a time with no adverse effects.



Seeing the light—LED light—is a reasonable accomplishment. But seeing something useful through the lens is clearly the ultimate goal. Fortunately, the human eye is an extremely sensitive photodetector. At high noon on a cloudless day, lots of light streams through your pupil, and the world appears bright indeed. But the eye doesn’t need all that optical power—it can perceive images with only a few microwatts of optical power passing through its lens. An LCD computer screen is similarly wasteful. It sends out a lot of photons, but only a small fraction of them enter your eye and hit the retina to form an image. But when the display is directly over your cornea, every photon generated by the display helps form the image.

The beauty of this approach is obvious: With the light coming from a lens on your pupil rather than from an external source, you need much less power to form an image. But how to get light from a lens? We’ve considered two basic approaches. One option is to build into the lens a display based on an array of LED pixels; we call this an active display. An alternative is to use passive pixels that merely modulate incoming light rather than producing their own. Basically, they construct an image by changing their color and transparency in reaction to a light source. (They’re similar to LCDs, in which tiny liquid-crystal ”shutters” block or transmit white light through a red, green, or blue filter.) For passive pixels on a functional contact lens, the light source would be the environment. The colors wouldn’t be as precise as with a white-backlit LCD, but the images could be quite sharp and finely resolved.

We’ve mainly pursued the active approach and have produced lenses that can accommodate an 8-by-8 array of LEDs. For now, active pixels are easier to attach to lenses. But using passive pixels would significantly reduce the contact’s overall power needs—if we can figure out how to make the pixels smaller, higher in contrast, and capable of reacting quickly to external signals.

By now you’re probably wondering how a person wearing one of our contact lenses would be able to focus on an image generated on the surface of the eye. After all, a normal and healthy eye cannot focus on objects that are fewer than 10 centimeters from the corneal surface. The LEDs by themselves merely produce a fuzzy splotch of color in the wearer’s field of vision. Somehow the image must be pushed away from the cornea. One way to do that is to employ an array of even smaller lenses placed on the surface of the contact lens. Arrays of such microlenses have been used in the past to focus lasers and, in photolithography, to draw patterns of light on a photoresist. On a contact lens, each pixel or small group of pixels would be assigned to a microlens placed between the eye and the pixels. Spacing a pixel and a microlens 360 micrometers apart would be enough to push back the virtual image and let the eye focus on it easily. To the wearer, the image would seem to hang in space about half a meter away, depending on the microlens.
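The microlens numbers above are consistent with the ordinary thin-lens equation. This is my own back-of-the-envelope check, not a calculation from the article, which gives only the 360-µm spacing and the roughly half-meter virtual image distance:

```python
# Thin lens: 1/f = 1/d_o + 1/d_i, with a negative d_i for a virtual image.
d_o = 360e-6    # pixel-to-microlens spacing (object distance), metres
d_i = -0.5      # virtual image about half a metre away, metres

f = 1.0 / (1.0 / d_o + 1.0 / d_i)
print(round(f * 1e6, 2))  # required focal length in µm: 360.26
```

The required focal length is only fractionally longer than the 360-µm spacing itself, which is why placing the pixel almost exactly at the microlens's focal plane pushes the virtual image far enough back for the eye to focus on it.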

Another way to make sharp images is to use a scanning microlaser or an array of microlasers. Laser beams diverge much less than LED light does, so they would produce a sharper image. A kind of actuated mirror would scan the beams from a red, a green, and a blue laser to generate an image. The resolution of the image would be limited primarily by the narrowness of the beams, and the lasers would obviously have to be extremely small, which would be a substantial challenge. However, using lasers would ensure that the image is in focus at all times and eliminate the need for microlenses.

Whether we use LEDs or lasers for our display, the area available for optoelectronics on the surface of the contact is really small: roughly 1.2 millimeters in diameter. The display must also be semitransparent, so that wearers can still see their surroundings. Those are tough but not impossible requirements. The LED chips we’ve built so far are 300 µm in diameter, and the light-emitting zone on each chip is a 60-µm-wide ring with a radius of 112 µm. We’re trying to reduce that by an order of magnitude. Our goal is an array of 3600 10-µm-wide pixels spaced 10 µm apart.
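The target figures above are self-consistent, as a quick check shows (assuming the 3600 pixels form a square grid, which the article does not state explicitly):

```python
pixels = 3600
grid = int(round(pixels ** 0.5))   # a 60 x 60 array
pitch_um = 10 + 10                 # 10-µm pixel plus 10-µm gap
side_um = grid * pitch_um          # side length of the whole array, in µm
print(grid, side_um)               # 60 1200 -> the array spans 1.2 mm
```

A 60-by-60 array at a 20-µm pitch spans 1.2 mm, exactly the optoelectronics area quoted for the lens.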

One other difficulty in putting a display on the eye is keeping it from moving around relative to the pupil. Normal contact lenses that correct for astigmatism are weighted on the bottom to maintain a specific orientation, give or take a few degrees. I figure the same technique could keep a display from tilting (unless the wearer blinked too often!).

Like all mobile electronics, these lenses must be powered by suitable sources, but among the options, none are particularly attractive. The space constraints are acute. For example, batteries are hard to miniaturize to this extent, require recharging, and raise the specter of, say, lithium ions floating around in the eye after an accident. A better strategy is gathering inertial power from the environment, by converting ambient vibrations into energy or by receiving solar or RF power. Most inertial power scavenging designs have unacceptably low power output, so we have focused on powering our lenses with solar or RF energy.

Let’s assume that 1 square centimeter of lens area is dedicated to power generation, and let’s say we devote the space to solar cells. Almost 300 microwatts of incoming power would be available indoors, with potentially much more available outdoors. At a conversion efficiency of 10 percent, these figures would translate to 30 µW of available electrical power, if all the subsystems of the contact lens were run indoors.

Collecting RF energy from a source in the user’s pocket would improve the numbers slightly. In this setup, the lens area would hold antennas rather than photovoltaic cells. The antennas’ output would be limited by the field strengths permitted at various frequencies. In the microwave bands between 1.5 gigahertz and 100 GHz, the exposure level considered safe for humans is 1 milliwatt per square centimeter. For our prototypes, we have fabricated the first generation of antennas that can transmit in the 900-megahertz to 6-GHz range, and we’re working on higher-efficiency versions. So from that one square centimeter of lens real estate, we should be able to extract at least 100 µW, depending on the efficiency of the antenna and the conversion circuit.
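The power-budget arithmetic in the last two paragraphs is easy to verify. The incoming-power and efficiency figures for solar are the article's own; the 10 percent RF conversion figure is my assumption, chosen to reproduce the quoted 100 µW:

```python
# Solar: ~300 µW of light falls on 1 cm^2 of lens area indoors,
# and the cells convert about 10 % of it to electricity.
incoming_uW = 300.0
solar_uW = incoming_uW * 0.10
print(solar_uW)   # 30.0 µW available indoors

# RF: the safe exposure limit is 1 mW/cm^2 in the microwave bands;
# harvesting 10 % of that over 1 cm^2 gives the quoted figure.
rf_limit_uW = 1000.0
rf_uW = rf_limit_uW * 0.10
print(rf_uW)      # 100.0 µW
```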

Having made all these subsystems work, the final challenge is making them all fit on the same tiny polymer disc. Recall the pieces that we need to cram onto a lens: metal microstructures to form antennas; compound semiconductors to make optoelectronic devices; advanced complementary metal-oxide-semiconductor silicon circuits for low-power control and RF telecommunication; microelectromechanical system (MEMS) transducers and resonators to tune the frequencies of the RF communication; and surface sensors that are reactive with the biochemical environment.

The semiconductor fabrication processes we’d typically use to make most of these components won’t work because they are both thermally and chemically incompatible with the flexible polymer substrate of the contact lens. To get around this problem, we independently fabricate most of the microcomponents on silicon-on-insulator wafers, and we fabricate the LEDs and some of the biosensors on other substrates. Each part has metal interconnects and is etched into a unique shape. The end yield is a collection of powder-fine parts that we then embed in the lens.

We start by preparing the substrate that will hold the microcomponents, a 100-µm-thick slice of polyethylene terephthalate. The substrate has photolithographically defined metal interconnect lines and binding sites. These binding sites are tiny wells, about 10 µm deep, where electrical connections will be made between components and the template. At the bottom of each well is a minuscule pool of a low-melting-point alloy that will later join together two interconnects in what amounts to micrometer-scale soldering.

We then submerge the plastic lens substrate in a liquid medium and flow the collection of microcomponents over it. The binding sites are cut to match the geometries of the individual parts so that a triangular component finds a triangular well, a circular part falls into a circular well, and so on. When a piece falls into its complementary well, a small metal pad on the surface of the component comes in contact with the alloy at the bottom of the well, causing a capillary force that lodges the component in place. After all the parts have found their slots, we drop the temperature to solidify the alloy. This step locks in the mechanical and electrical contact between the components, the interconnects, and the substrate.

The next step is to ensure that all the potentially harmful components that we’ve just assembled are completely safe and comfortable to wear. The lenses we’ve been developing resemble existing gas-permeable contacts with small patches of a slightly less breathable material that wraps around the electronic components. We’ve been encapsulating the functional parts with poly(methyl methacrylate), the polymer used to make earlier generations of contact lenses. Then there’s the question of the interaction of heat and light with the eye. Not only must the system’s power consumption be very low for the sake of the energy budget, it must also avoid generating enough heat to damage the eye, so the temperature must remain below 45 °C. We have yet to investigate this concern fully, but our preliminary analyses suggest that heat shouldn’t be a big problem.


eye04
Photos: University of Washington

eye04
Photos: University of Washington

In Focus:
One lens prototype [left] has several interconnects, single-crystal silicon components, and compound-semiconductor components embedded within. Another sample lens [right] contains a radio chip, an antenna, and a red LED.



All the basic technologies needed to build functional contact lenses are in place. We’ve tested our first few prototypes on animals, proving that the platform can be safe. What we need to do now is show all the subsystems working together, shrink some of the components even more, and extend the RF power harvesting to higher efficiencies and to distances greater than the few centimeters we have now. We also need to build a companion device that would do all the necessary computing or image processing to truly prove that the system can form images on demand. We’re starting with a simple product, a contact lens with a single light source, and we aim to work up to more sophisticated lenses that can superimpose computer-generated high-resolution color graphics on a user’s real field of vision.

The true promise of this research is not just the actual system we end up making, whether it’s a display, a biosensor, or both. We already see a future in which the humble contact lens becomes a real platform, like the iPhone is today, with lots of developers contributing their ideas and inventions. As far as we’re concerned, the possibilities extend as far as the eye can see, and beyond.

The author would like to thank his past and present students and collaborators, especially Brian Otis, Desney Tan, and Tueng Shen, for their contributions to this research.
About the Author

Babak A. Parviz wakes up every morning and sticks a small piece of polymer in each eye. So it was only a matter of time before this bionanotechnology expert at the University of Washington, in Seattle, imagined contact lenses with built-in circuits and LEDs. “It’s really fun to hook things up and see how they might work,” he says. In “For Your Eye Only”, Parviz previews a contact lens for the 21st century.
To Probe Further

You can find details about the fabrication process using self-assembly in “Self-Assembled Single-Crystal Silicon Circuits on Plastic,” by Sean A. Stauth and Babak A. Parviz, in Proceedings of the National Academy of Sciences, 19 September 2006.

Monday, October 12, 2009

How to make your own orkut Song scrap?

Why should we always search for orkut song scraps? Why can't we make our own scrap song, with our own images included in it, and even the song we like?
In general you can't find every orkut scrap song ready-made, but you can find songs on websites that let you play them online.
The link must end in ".mp3" (or some other audio file extension). If you open such a page, you should hear only the song, playing in a player like QuickTime, like this.


This is one method

[Capture.JPG]

See the image above.


So here is only one way; there are many others, but sorry, I am not an expert in this....
I just know a little, which I would like to share...

The highlighted lines are where the song link needs to be added.
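To give a concrete idea of the kind of markup that used to go into a song scrap, here is a minimal sketch. The URL `http://example.com/song.mp3` is a made-up placeholder, not a real song link; you would paste in the direct ".mp3" link you found, and the exact attributes orkut accepted varied over time, so treat this as an illustration rather than guaranteed-working code.

```html
<!-- Sketch of an embedded song player for a scrap.
     Replace the src URL (a hypothetical placeholder here)
     with a direct link ending in .mp3 -->
<embed src="http://example.com/song.mp3"
       autostart="true"
       loop="false"
       width="300" height="45">
</embed>
```

The `autostart` attribute makes the song play as soon as the scrap loads, and `width`/`height` just size the small player bar that appears in the scrap.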

Saturday, October 10, 2009

Japanese catwalk robot unveiled

Japan is great at creating the latest technologies; maybe they will be the first to release things that remove humans from work that people do today.
They are maniacs, always inventing something. I think all of today's fashion models may one day be replaced by robots like this one, invented by the Japanese.
The female humanoid with slightly oversized eyes, a tiny nose and shoulder-length hair boasts 42 motion motors programmed to mimic the movements of flesh-and-blood fashion models.
"Hello everybody, I am cybernetic human HRP-4C," said the futuristic fashionista, opening her media premiere at the National Institute of Advanced Industrial Science and Technology outside Tokyo.
The fashion-bot is 5ft 2 ins, the average height of young Japanese women, but weighs in at a waiflike 95 pounds (43 kilos) – including batteries.
Appearing before photographers and television crews, the seductive cyborg struck poses, flashed smiles and pouted sulkily according to commands transmitted wirelessly via Bluetooth devices.
The performance fell short of flawless when she occasionally mixed up her facial expressions – a mistake the inventors put down to a case of the nerves as a hail of camera shutters confused her sound recognition sensors.
She has a slightly manga-inspired human face but a silver metallic body.
"If we had made the robot too similar to a real human, it would have been uncanny," said one of the inventors, humanoid research leader Shuji Kajita. "We have deliberately leant toward an anime style."
The institute said the robot "has been developed mainly for use in the entertainment industry" but is not for sale at the moment.
"We unveiled this to attract attention in society," said Junji Ito, a senior official at the institute, who said he saw the HRP-4C as a stepping stone toward creating a humanoid industry.
"It's important that people feel good about humanoids and want to work with them," he said. "We shifted from a dry mechanical image to a very human image."
The preview was a warm-up for the robot's appearance at a Tokyo fashion show on March 23.
Like her real-life counterparts, HRP-4C commands a hefty price – the institute said developing and building her cost more than 200 million yen (£1.4 million).
Hirohisa Hirukawa, another researcher, said the institute hoped to commercialise the humanoid in future.
They are even trying to give it nearly all human motions, and I think they will do it very well; the face already looks so real. The robot's program will also be available to everyone who buys it, so they can write their own programs for it. I think the product may be dangerous: if it can be programmed to do anything, then it can be programmed to kill.

Surface Computing

This is a kind of new technology that will rule the world in the days to come. Nowadays we use a keyboard and mouse to operate a computer, or at best a touch screen. But just imagine a system built into any surface, even a table: the table senses your touch and does the work. It is not a touch screen, which needs a monitor; surface computing does not, and it can work almost anywhere the apparatus fits. I am not an expert on it; I know only a little, which I want to share.
At present Microsoft Surface is the best-known venture, but as it is very expensive for businesses it is little used. Mindstorm also developed a product of its own, the Aurora table.
Mindstorm took part in this year’s X Factor marking the first time multi touch interactive surfaces have taken prime place on a widely popular TV show. The X Factor found Mindstorm due to our involvement with ITV in the past in shows such as The Krypton Factor.
We had proved that our technology is robust and reliable, which is key to avoiding wasted production minutes spent setting up or fixing less reliable technology products. The X Factor was also after a piece of iconic furniture design that would complement the high-quality production and create an iconic scene. The Mindstorm Aurora table impressed them, and the decision was quickly made that we should collaborate.

The X Factor judges, The Aurora table and a few proud Mindstorm employees


It was obvious that the place for the Aurora table would be in the final selection of candidates during the boot camp. Normally the judges would gather round a table with printed photos on it, but it quickly became clear that the Mindstorm Aurora multi-touch table would be able to completely transform this experience.
We worked closely with the X Factor team to create an application that would reflect their brand. On the day of shooting, we had a lot of fun showing all the other applications the Aurora table is capable of to the X Factor production team, as well as to all the contestants. The table quickly became affectionately known as ‘The Magic table’.
The actual candidate selection process using the Aurora table was a great success and the judges spent almost 3 hours discussing, debating and moving candidates around the interactive table.
Overall it was a fantastic success and great fun as well! We are now looking forward to other future collaborations with the X Factor.