ANTO VARGHESE
Nanomedicine is the medical application of nanotechnology. Nanomedicine ranges from the medical applications of nanomaterials, to nanoelectronic biosensors, and even possible future applications of molecular nanotechnology. Current problems for nanomedicine involve understanding the issues related to the toxicity and environmental impact of nanoscale materials. One nanometer is one-millionth of a millimeter.
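As a quick sanity check on that scale, here is a tiny Python sketch; the constants are just the unit definitions restated from the paragraph above, nothing more.

```python
# Unit conversions for the nanoscale figure quoted above.
NM_PER_MM = 1_000_000        # 1 mm = 1,000,000 nm
NM_PER_M = 1_000_000_000     # 1 m  = 1,000,000,000 nm

one_nm_in_mm = 1 / NM_PER_MM   # 1e-6 mm, i.e. one-millionth of a millimeter
one_nm_in_m = 1 / NM_PER_M     # 1e-9 m

print(f"1 nm = {one_nm_in_mm} mm = {one_nm_in_m} m")
```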
Medical use of nanomaterials
Two forms of nanomedicine that have already been tested in mice and are awaiting human trials are the use of gold nanoshells to help diagnose and treat cancer, and the use of liposomes as vaccine adjuvants and as vehicles for drug transport. Drug detoxification is another application of nanomedicine which has shown promising results in rats. A benefit of using the nanoscale for medical technologies is that smaller devices are less invasive and can possibly be implanted inside the body, and biochemical reaction times are much shorter. These devices are faster and more sensitive than conventional drug-delivery systems.
reeshma ramesan
New Algorithm Could Substantially Speed Up MRI Scans, ScienceDaily
Magnetic resonance imaging (MRI) devices can scan the inside of the body in intricate detail, allowing clinicians to spot even the earliest signs of cancer or other abnormalities. But they can be a long and uncomfortable experience for patients, requiring them to lie still in the machine for up to 45 minutes.
Now this scan time could be cut to just 15 minutes, thanks to an algorithm developed at MIT's Research Laboratory of Electronics.
MRI scanners use strong magnetic fields and radio waves to produce images of the body. Rather than taking just one scan of a patient, the machines typically acquire a variety of images of the same body part, each designed to create a contrast between different types of tissue. By comparing multiple images of the same region, and studying how the contrasts vary across the different tissue types, radiologists can detect subtle abnormalities such as a developing tumor. But taking multiple scans of the same region in this way is time-consuming, meaning patients must spend long periods inside the machine.
In a paper to be published in the journal Magnetic Resonance in Medicine, researchers led by Elfar Adalsteinsson, an associate professor of electrical engineering and computer science and health sciences and technology, and Vivek Goyal, the Esther and Harold E. Edgerton Career Development Associate Professor of Electrical Engineering and Computer Science, detail an algorithm they have developed to dramatically speed up this process. The algorithm uses information gained from the first contrast scan to help it produce the subsequent images. In this way, the scanner does not have to start from scratch each time it produces a different image from the raw data, but already has a basic outline to work from, considerably shortening the time it takes to acquire each later scan.
To create this outline, the software looks for features that are common to all the different scans, such as the basic anatomical structure, Adalsteinsson says. "If the machine is taking a scan of your brain, your head won't move from one image to the next," he says. "So if scan number two already knows where your head is, then it won't take as long to produce the image as when the data had to be acquired from scratch for the first scan."
In particular, the algorithm uses the first scan to predict the likely position of the boundaries between different types of tissue in the subsequent contrast scans. "Given the data from one contrast, it gives you a certain likelihood that a particular edge, say the periphery of the brain or the edges that confine different compartments inside the brain, will be in the same place," Adalsteinsson says.
However, the algorithm cannot impose too much information from the first scan onto the subsequent ones, Goyal says, as this would risk losing the unique tissue features revealed by the different contrasts. "You don't want to presuppose too much," he says. "So you don't assume, for example, that the bright-and-dark pattern from one image will be replicated in the next image, because in fact those kinds of dark and light patterns are often reversed, and can reveal completely different tissue properties."
So for each pixel, the algorithm calculates what new information it needs to construct the image, and what information -- such as the edges of different types of tissue -- it can take from the previous scans, says graduate student and first author Berkin Bilgic.
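To make the idea concrete, here is a rough, hypothetical Python/NumPy sketch of how a first contrast scan might supply an edge prior for a later one. It is not the MIT group's actual reconstruction, which operates on undersampled k-space data; the gradient-based edge map, the smoothing step and the function names are all simplifications chosen only to illustrate reusing structure without copying intensities.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_prior(first_contrast, frac=0.2):
    """Estimate likely tissue-boundary locations from the first contrast image.

    A thresholded gradient magnitude stands in for the edge-likelihood model
    described in the article (an assumption, not the authors' method).
    """
    gy, gx = np.gradient(first_contrast.astype(float))
    grad = np.hypot(gx, gy)
    return grad > frac * grad.max()          # boolean map of probable edges

def reconstruct_second_contrast(raw_second, edges, sigma=1.5):
    """Toy reconstruction of a later contrast scan.

    Where no edge is expected, lean on smoothness (less new data needed);
    where the prior predicts a boundary, keep the freshly acquired values,
    so bright/dark patterns are never copied from the first scan.
    """
    smoothed = gaussian_filter(raw_second.astype(float), sigma=sigma)
    return np.where(edges, raw_second, smoothed)

# Hypothetical usage with two synthetic 2-D "scans":
t1 = np.zeros((64, 64)); t1[16:48, 16:48] = 1.0        # first contrast
t2 = 1.0 - t1 + 0.1 * np.random.randn(64, 64)          # reversed contrast + noise
result = reconstruct_second_contrast(t2, edge_prior(t1))
```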
The result is an MRI scan that is three times quicker to complete, cutting the time patients spend in the machine from 45 minutes to 15. This faster scan time does have a slight impact on image quality, Bilgic admits, but the results are still much better than those of competing fast-imaging algorithms.
AJAY PAUL
Need for Speed: The Run is a racing video game, the 18th title in the Need for Speed franchise, developed by EA Black Box and published by Electronic Arts. The Wii and 3DS versions were developed by Firebrand Games, the team behind Undercover and Nitro (both DS versions). It was released in North America on November 15, 2011, and in Europe on November 18, 2011.
The game is described as an "illicit, high-stakes race across the country. The only way to get your life back is to be the first from San Francisco to New York. No speed limits. No rules. No allies. All you have are your driving skills and sheer determination".[5]
Producers Jason DeLong and Steve Anthony stated in an interview that Black Box is aiming for critical acclaim after their last game received universally poor ratings.[6] The Run was in production for three years, even though previous Black Box titles had much shorter development periods.[7]
www.needforspeed.com/therun
Assistant treasurer Axel Martinez said today that Google will invest $94 million into four commercial solar photovoltaic projects alongside private equity firm KKR. Google is investing equity in SunTap Energy, an entity created by KKR for solar investing.
The projects themselves are operated by Recurrent Energy, a company which specializes in commercial and utility-scale solar photovoltaic projects. With a combined generating capacity of 88 megawatts, they will produce about 160 million kilowatt-hours per year, or enough to power 13,000 homes.
The solar farms will feed electricity into the grid run by the Sacramento Municipal Utility District, which created a feed-in tariff for renewable energy. A feed-in tariff pays a renewable power producer a higher rate than power from conventional sources, a model widely used in Europe. In the U.S., solar is also subsidized with a federal tax credit.
Three of the Sacramento projects will be completed in early 2012 with the fourth coming online later in the year, Recurrent Energy said. "This investment is a clear demonstration of solar's ability to attract private capital from well established investors like Google and KKR," Recurrent Energy CEO Arno Harris said in a statement.
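The figures in the post imply a quick back-of-the-envelope calculation. The sketch below simply rechecks the stated numbers; the ~21% capacity factor and per-home consumption it derives are inferences, not figures from the article.

```python
# Rough consistency check of the quoted solar figures.
capacity_mw = 88                    # combined generating capacity
annual_output_kwh = 160_000_000     # ~160 million kWh per year (as stated)
homes_powered = 13_000

hours_per_year = 8760
capacity_factor = annual_output_kwh / (capacity_mw * 1000 * hours_per_year)
kwh_per_home = annual_output_kwh / homes_powered

print(f"Implied capacity factor: {capacity_factor:.0%}")              # ~21%
print(f"Implied consumption per home: {kwh_per_home:,.0f} kWh/year")  # ~12,300
```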
Mithun Mathew
Iris (Intelligent Rival Imitator of Siri) is a personal assistant application for Android. The application uses natural language processing to answer questions based on the user's voice requests. Iris currently supports call, text, contact lookup, and web search actions, including playing videos and looking up lyrics, movie reviews, recipes, news, weather, places and more. It was developed by Narayan Babu and his team at a Kochi-based firm named Dexetra in eight hours. The name is Siri spelled backwards, Siri being the original application for the same purpose built by Apple Inc.
Arun Jose
Ultrabooks to replace Laptops by 2013, says Intel
NEW DELHI: Electronic chip maker Intel is betting big on the evolution of a new notebook form factor, the Ultrabook, and expects it to replace laptops by 2013. "Our focus for next few years is going to be on energy efficiency, security and connectivity. Ultrabook as category was launched this year. This will change face of computing in next two to three years," Intel South Asia's Director of Marketing Sandeep Aurora told reporters here. Ultrabooks have a sleek and compact design with computing speed that can match the computing needs of present laptop users.
"We expect that laptops will be completely replaced by Ultrabooks by 2013," Aurora said.
athira
4G Technology
Fourth Generation (4G) mobiles
4G, also called the Fourth-Generation Communications System, is a term used to describe the next step in wireless communications. A 4G system can provide a comprehensive IP solution where voice, data and streamed multimedia can be delivered to users on an "Anytime, Anywhere" basis. The data transfer rates are also much higher than in previous generations.
The main objectives of 4G are:
1) 4G will be a fully IP-based integrated system.
2) It will be capable of providing speeds of 100 Mbit/s (for high-mobility use, such as outdoors in a moving vehicle) and 1 Gbit/s (for low-mobility use, such as indoors or while stationary).
3) It can provide premium quality and high security.
4) 4G will offer all types of services at an affordable cost.
4G is being developed to meet the quality of service (QoS) and data-rate requirements set by forthcoming applications such as wireless broadband access, Multimedia Messaging Service (MMS), video chat, mobile TV, high-definition TV content, Digital Video Broadcasting (DVB), minimal services like voice and data, and other streaming services.
4G technology allows smooth, high-quality video transmission. It will enable fast downloading of full-length songs or music pieces in real time.
The business and popularity of 4G mobiles is predicted to be vast. On average, the 4G mobile market was expected to be worth over $400B by 2009 and to dominate wireless communications, with its converged system replacing most conventional wireless infrastructure.
Data Rates For 4G:
At present, download speeds for mobile Internet connections start from 9.6 kbit/s on 2G cellular networks. However, in actual use the data rates are usually slower, especially in crowded areas or when there is congestion in the network.
4G mobile data transmission rates are planned to be up to 20 megabits per second, which means it will be about 10-20 times faster than standard ADSL services.
In terms of connection speeds, 4G will be about 200 times faster than present 2G mobile data rates, and about 10 times faster than 3G broadband mobile. 3G data rates are currently 2 Mbit/s, which is very fast compared to 2G's 9.6 kbit/s.
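The speed comparisons above are easy to sanity-check. The short sketch below recomputes the ratios from the quoted rates and adds an illustrative download time for a 5 MB song; the file size is an assumption, not from the post.

```python
# Data rates quoted in the post, in bits per second.
RATE_2G = 9_600           # 9.6 kbit/s
RATE_3G = 2_000_000       # 2 Mbit/s
RATE_4G = 20_000_000      # 20 Mbit/s (planned)

print(f"4G vs 2G: ~{RATE_4G / RATE_2G:.0f}x")   # ~2083x (larger than the ~200x stated above)
print(f"4G vs 3G: {RATE_4G / RATE_3G:.0f}x")    # 10x
print(f"3G vs 2G: ~{RATE_3G / RATE_2G:.0f}x")   # ~208x

song_bits = 5 * 8_000_000   # a 5 MB song (assumed size), in bits
for name, rate in [("2G", RATE_2G), ("3G", RATE_3G), ("4G", RATE_4G)]:
    print(f"{name}: {song_bits / rate:.0f} s to download the song")
```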
Vivek Kanissery
hi all,
we all have heard of the term cloud computing somewhere or the other.. but don't have many details regarding the same..
cloud computing basically means technology that gives the user the liberty to store data somewhere on the net.. the exact location is something that concerns him the least.. as he will be able to access the data as and when needed.. the major advantage of this technology is that there is not much worry about losing data if the person's computer crashes or is stolen..
all that is needed to avail this facility is internet access..
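as a purely illustrative sketch of the idea (the endpoint URL and token below are made-up placeholders, not a real service), storing a file on the net and fetching it back later over HTTP might look like this:

```python
import requests

# Hypothetical object-storage endpoint -- replace with a real provider's API.
BASE_URL = "https://storage.example.com/myfiles"
headers = {"Authorization": "Bearer dummy-token"}

# Upload: the data now lives "somewhere on the net", not only on this machine.
with open("notes.txt", "rb") as f:
    requests.put(f"{BASE_URL}/notes.txt", data=f, headers=headers)

# Later, from any machine with internet access, fetch it back.
resp = requests.get(f"{BASE_URL}/notes.txt", headers=headers)
resp.raise_for_status()
print(resp.content.decode())
```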
read more: http://www.wikinvest.com/concept/Cloud_Computing
http://en.wikipedia.org/wiki/ Cloud_computing