
Monday 30 July 2018

Artificial intelligence can predict your personality ... simply by tracking your eyes





It's often been said that the eyes are the window to the soul, revealing what we think and how we feel. Now, new research reveals that your eyes may also be an indicator of your personality type, simply by the way they move.

Developed by the University of South Australia in partnership with the University of Stuttgart, Flinders University and the Max Planck Institute for Informatics in Germany, the research uses state-of-the-art machine-learning algorithms to demonstrate a link between personality and eye movements.

Findings show that people's eye movements reveal whether they are sociable, conscientious or curious, with the algorithm reliably recognising four of the Big Five personality traits: neuroticism, extroversion, agreeableness, and conscientiousness.

Researchers tracked the eye movements of 42 participants as they undertook everyday tasks around a university campus, and subsequently assessed their personality traits using well-established questionnaires.

UniSA's Dr Tobias Loetscher says the study provides new links between previously under-investigated eye movements and personality traits, and delivers important insights for the emerging fields of social signal processing and social robotics.

"There's certainly the potential for these findings to improve human-machine interactions," Dr Loetscher says.

"People are always looking for improved, personalised services. However, today's robots and computers are not socially aware, so they cannot adapt to non-verbal cues.

"This research provides opportunities to develop robots and computers so that they can become more natural, and better at interpreting human social signals."

Dr Loetscher says the findings also provide an important bridge between tightly controlled laboratory studies and the study of natural eye movements in real-world environments.

"This research has tracked and measured the visual behaviour of people going about their everyday tasks, providing more natural responses than if they were in a lab.

"And thanks to our machine-learning approach, we not only validate the role of personality in explaining eye movement in everyday life, but also reveal new eye movement characteristics as predictors of personality traits."
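The shape of such a pipeline can be sketched in a few lines of Python. Everything below is illustrative, not the study's actual method or data: each participant's gaze is summarised as a feature vector (stand-ins for measures like fixation duration or saccade rate), and a simple nearest-centroid classifier predicts a high/low trait label.

```python
import random

# Illustrative sketch only: synthetic features and a simple classifier,
# standing in for the study's real eye-movement features and ML pipeline.
random.seed(0)
n = 42                                # participants, as in the study

def features(trait):
    # Four synthetic eye-movement features; the trait label shifts them
    # slightly so there is a signal to learn.
    return [random.gauss(trait * 0.8, 1.0) for _ in range(4)]

traits = [random.randint(0, 1) for _ in range(n)]   # high/low on one trait
X = [features(t) for t in traits]

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Leave-one-out evaluation: classify each held-out participant by the
# nearer class centroid computed from everyone else.
correct = 0
for i in range(n):
    c0 = centroid([X[j] for j in range(n) if j != i and traits[j] == 0])
    c1 = centroid([X[j] for j in range(n) if j != i and traits[j] == 1])
    pred = int(dist(X[i], c1) < dist(X[i], c0))
    correct += pred == traits[i]

print(f"leave-one-out accuracy: {correct / n:.2f}")
```

The study's actual pipeline was more sophisticated, but the structure (per-person gaze features in, trait prediction out, accuracy evaluated on held-out people) is the same.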


News Source :- sciencedaily.com

Optical neural network demo (30/7/2018)






Researchers at the National Institute of Standards and Technology (NIST) have made a silicon chip that distributes optical signals precisely across a miniature brain-like grid, showcasing a potential new design for neural networks.

The human brain has billions of neurons (nerve cells), each with thousands of connections to other neurons. Many computing research projects aim to emulate the brain by creating circuits of artificial neural networks. But conventional electronics, including the electrical wiring of semiconductor circuits, often impedes the extremely complex routing required for useful neural networks.

The NIST team proposes to use light instead of electricity as a signaling medium. Neural networks already have demonstrated remarkable power in solving complex problems, including rapid pattern recognition and data analysis. The use of light would eliminate interference due to electrical charge and the signals would travel faster and farther.

"Light's advantages could improve the performance of neural nets for scientific data analysis such as searches for Earth-like planets and quantum information science, and accelerate the development of highly intuitive control systems for autonomous vehicles," NIST physicist Jeff Chiles said.

A conventional computer processes information through algorithms, or human-coded rules. By contrast, a neural network relies on a network of connections among processing elements, or neurons, which can be trained to recognize certain patterns of stimuli. A neural or neuromorphic computer would consist of a large, complex system of neural networks.
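The contrast can be sketched in a few lines of Python (an illustrative software perceptron, not NIST's photonic hardware): a hard-coded rule versus a single artificial neuron that learns the same behaviour from examples.

```python
# Conventional computing: an explicit, human-coded rule.
def rule_based(x1, x2):
    return 1 if (x1 and x2) else 0

# Neural computing: one neuron with trainable weights learns the same
# AND behaviour from labelled examples (classic perceptron rule).
w = [0.0, 0.0]
b = 0.0
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
for _ in range(10):                   # a few training passes suffice
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        w[0] += 0.1 * err * x1        # nudge weights toward the target
        w[1] += 0.1 * err * x2
        b += 0.1 * err

# After training, the learned neuron matches the hand-written rule.
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)
```

In a neuromorphic system, many such elements are wired together, which is exactly the dense connectivity problem the NIST chip's optical routing addresses.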

Described in a new paper, the NIST chip overcomes a major challenge to the use of light signals by vertically stacking two layers of photonic waveguides -- structures that confine light into narrow lines for routing optical signals, much as wires route electrical signals. This three-dimensional (3D) design enables complex routing schemes, which are necessary to mimic neural systems. Furthermore, this design can easily be extended to incorporate additional waveguiding layers when needed for more complex networks.

The stacked waveguides form a three-dimensional grid with 10 inputs or "upstream" neurons each connecting to 10 outputs or "downstream" neurons, for a total of 100 receivers. Fabricated on a silicon wafer, the waveguides are made of silicon nitride and are each 800 nanometers (nm) wide and 400 nm thick. Researchers created software to automatically generate signal routing, with adjustable levels of connectivity between the neurons.

Laser light was directed into the chip through an optical fiber. The goal was to route each input to every output group, following a selected distribution pattern for light intensity or power. Power levels represent the pattern and degree of connectivity in the circuit. The authors demonstrated two schemes for controlling output intensity: uniform (each output receives the same power) and a "bell curve" distribution (in which middle neurons receive the most power, while peripheral neurons receive less).
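A quick numeric sketch of the two intensity schemes across the 10 outputs, assuming a simple Gaussian profile for the "bell curve" case (the exact profile used on the chip is not specified here):

```python
import math

n = 10  # outputs ("downstream" neurons) per input

# Uniform scheme: each output receives an equal share of the power.
uniform = [1.0 / n] * n

# Bell-curve scheme: middle outputs receive the most power, peripheral
# outputs less. Modelled here as a Gaussian centred on the middle outputs.
center, width = (n - 1) / 2, 2.0
bell = [math.exp(-(((i - center) / width) ** 2)) for i in range(n)]
total = sum(bell)
bell = [p / total for p in bell]      # normalise so shares sum to 1

print([round(p, 3) for p in bell])
```

Either list of power shares then serves as the target distribution the waveguide routing is designed to deliver.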

To evaluate the results, researchers made images of the output signals. All signals were focused through a microscope lens onto a semiconductor sensor and processed into image frames. This method allows many devices to be analyzed at the same time with high precision. The output was highly uniform, with low error rates, confirming precise power distribution.

"We've really done two things here," Chiles said. "We've begun to use the third dimension to enable more optical connectivity, and we've developed a new measurement technique to rapidly characterize many devices in a photonic system. Both advances are crucial as we begin to scale up to massive optoelectronic neural systems."


News source :- sciencedaily.com
  

Tuesday 24 October 2017

'Daydreaming' brain network helps us perform routine tasks


LONDON: The brain network involved in daydreaming plays an important role in allowing us to perform routine tasks efficiently, without investing too much time and energy, a study has found.

Scientists at the University of Cambridge in the UK showed that far from being just 'background activity', the 'default mode network' involved in daydreaming may be essential to helping us perform tasks on autopilot.

The findings have relevance to brain injury, particularly traumatic brain injury, where problems with memory and impulsivity can substantially compromise social reintegration. They may also have relevance for mental health disorders, such as addiction, depression and obsessive compulsive disorder, where particular thought patterns drive repeated behaviours, and for understanding the mechanisms of anaesthetic agents and other drugs on the brain.

Previously, scientists at the Washington University School of Medicine had found that a collection of brain regions appeared to be more active during such states of rest. This network was named the 'default mode network' (DMN). While it has since been linked to, among other things, daydreaming, thinking about the past, planning for the future, and creativity, its precise function is unclear.

In the new research, published in the journal Proceedings of the National Academy of Sciences, scientists showed that the DMN plays an important role in allowing us to switch to 'autopilot' once we are familiar with a task.

In the study, 28 volunteers took part in a task while lying inside a magnetic resonance imaging (MRI) scanner. Functional MRI (fMRI) measures changes in brain oxygen levels as a proxy for neural activity. Participants were shown four cards and asked to match a target card to one of these cards. There were three possible rules - matching by colour, shape or number. Volunteers were not told the rule, but rather had to work it out for themselves through trial and error.

The most interesting differences in brain activity occurred when comparing the two stages of the task - acquisition (where the participants were learning the rules by trial and error) and application (where the participants had learned the rule and were now applying it).

During the acquisition stage, the dorsal attention network, which has been associated with the processing of attention-demanding information, was more active. However, in the application stage, where participants utilised learned rules from memory, the DMN was more active. In this stage, the stronger the relationship between activity in the DMN and in regions of the brain associated with memory, such as the hippocampus, the faster and more accurately the volunteer was able to perform the task. This suggested that during the application stage, the participants could efficiently respond to the task using the rule from memory.

"Rather than waiting passively for things to happen to us, we are constantly trying to predict the environment around us," said Deniz Vatansever, a former student at the University of Cambridge.

"Our evidence suggests it is the default mode network that enables us to do this. It is essentially like an autopilot that helps us make fast decisions when we know what the rules of the environment are," said Vatansever, who is now based at the University of York.

"So for example, when you are driving to work in the morning along a familiar route, the default mode network will be active, enabling us to perform our task without having to invest lots of time and energy into every decision," he said.
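The acquisition stage of the card task can be simulated in a few lines of Python. This is a toy illustration, not the study's exact protocol: the true rule (match by colour, shape, or number) is hidden, and a learner eliminates candidate rules using trial-and-error feedback, just as the participants had to.

```python
import random

random.seed(1)
RULES = ("colour", "shape", "number")

def make_card():
    # Each card has a colour, a shape, and a number (illustrative values).
    return {"colour": random.choice("RGB"),
            "shape": random.choice(("circle", "square", "triangle")),
            "number": random.randint(1, 4)}

true_rule = "shape"                   # hidden from the learner
candidates = set(RULES)               # hypotheses still consistent with feedback
for trials in range(1, 1001):         # capped for safety; ends much sooner
    target, choice = make_card(), make_card()
    matches = {r: target[r] == choice[r] for r in RULES}
    correct = matches[true_rule]      # the feedback the experimenter gives
    # Discard any hypothesis that disagrees with the observed feedback.
    candidates = {r for r in candidates if matches[r] == correct}
    if len(candidates) == 1:
        break

found = candidates.pop()
print(f"rule '{found}' identified after {trials} trials")
```

Once only one hypothesis survives, the learner has reached the application stage: responding from memory rather than searching, which is where the study found the DMN takes over.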

Monday 23 October 2017

Isro to launch Cartosat 2 sat with 30 nano sats in mid-December



Surendra Singh | TNN | Updated: Oct 23, 2017, 12:10 IST

HIGHLIGHTS
- Isro will be busy launching a series of satellites from December onwards.
- It is targeting to launch Cartosat along with 30 nano satellites of foreign countries in the second half of December.
- The replacement satellite for IRNSS-1A will be launched soon thereafter.

NEW DELHI: After the unsuccessful launch of navigation satellite IRNSS-1H, the Indian Space Research Organisation (Isro) is gearing up to launch a remote sensing satellite of the Cartosat-2 series along with 30 nano satellites of foreign countries in the second half of December.

Vikram Sarabhai Space Centre (VSSC) director Dr K Sivan said, "Isro will be busy in launching a series of satellites from December onwards. We are targeting to launch Cartosat along with 30 nano satellites of foreign countries in the second half of December."

He said, "The replacement satellite for IRNSS-1A (the first navigation satellite whose three atomic clocks, meant to provide precise locational data, had stopped working last year) will be launched soon thereafter. Both these launches will be from the first launchpad at Sriharikota, as the second launchpad will be busy launching three GSLV rockets, including the Chandrayaan-2 mission in March. If for any reason the Cartosat launch is delayed in December, it will also stall the launch of replacement satellite IRNSS-1I, as both these launches have been planned from the first launchpad."

The three GSLV launches, which will involve two GSLV Mk II and one GSLV Mk III (Isro's "fat boy"), will be from the second launchpad at Sriharikota. Instead of PSLV (which was used for launching the Chandrayaan-1 mission in 2008), Isro is using GSLV Mk II for the second lunar mission as the payload is heavier this time (combined launch mass 3,250 kg). The payload will constitute an orbiter, a lander and a rover.

Dr Sivan said, "After the IRNSS-1H satellite failure, corrective measures will be taken in all rockets before the launches." Though the inquiry into the reasons for the heat shield glitch is still going on, "initial findings suggested a defect in the pyro elements of the rocket which deal with the stage separation mechanism". The VSSC director said the committee probing the failure of the IRNSS-1H launch will submit its investigation report well before the upcoming launches.

He said the faulty satellite, stuck inside the heat shield, is still orbiting in outer space and is "unlikely to fall into the Pacific Ocean anytime soon". On August 31, PSLV-C39 could not deliver the 1.4-tonne IRNSS-1H into geo orbit as its heat shield did not separate minutes after the rocket's lift-off from Sriharikota.


Sunday 22 October 2017

SCIENCE NEWS (Gravitational waves, origins of gold: What the cosmic crash confirmed)

Gravitational waves, origins of gold: What the cosmic crash confirmed
What happened
Scientists announced Monday that after picking up two faint signals in mid-August, they were able to find the location of the long-ago crash and see the end of it play out. Measurements of the light and other energy that the crash produced helped them answer some cosmic questions.

Gravitational waves
Scientists, starting with Einstein, figured that when two neutron stars collide they would produce a gravitational wave, a ripple in the universe-wide fabric of space-time. The four other times that these waves were detected, they were the result of merging black holes. This is the first time scientists observed one caused by a neutron star crash.

Where gold comes from
The Big Bang created light elements like hydrogen and helium. Supernovas created medium elements, up to iron. But what about the heavier ones like gold, platinum and uranium? Astronomers thought they came from two neutron stars colliding, and when they saw this crash they confirmed it. One astronomer described it as a "giant train wreck that creates gold." They estimate that this one event generated an amount of gold and platinum that outweighs the entire Earth by a factor of 10.

Gamma rays
Gamma ray bursts are some of the most energetic and deadly pulses of radiation in the universe. Astronomers weren't quite sure where short gamma ray bursts came from, but figured that a crash of neutron stars was a good bet. Watching this event confirmed the theory.

Expanding universe
Astronomers know the universe is expanding, and they use a figure called the Hubble constant to describe how fast. The two different ways scientists have of measuring this speed of expansion yield two numbers that are close to each other, but not quite the same. By measuring how far the gravitational wave had to travel, astronomers came up with another estimate that sits between the earlier two, though it also comes with a large margin of error.
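The arithmetic behind that new estimate is simple: the Hubble constant is recession velocity divided by distance. The numbers below are the approximate published values for this event (distance of roughly 43.8 megaparsecs measured from the gravitational-wave signal itself, host-galaxy recession velocity of roughly 3,017 km/s), used here only to illustrate the calculation.

```python
# Back-of-the-envelope gravitational-wave Hubble estimate: H0 = v / d.
distance_mpc = 43.8       # measured directly from the gravitational wave
velocity_km_s = 3017.0    # from the host galaxy's redshift

H0 = velocity_km_s / distance_mpc
print(f"H0 = {H0:.0f} km/s per Mpc")   # close to 70, between the earlier
                                       # estimates of roughly 67 and 73
```

The large margin of error comes mainly from uncertainty in the distance and in how much of the galaxy's motion is due to cosmic expansion rather than local gravity.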

SCIENCE NEWS (NASA astronauts complete spacewalk to install new ISS camera)

NASA astronauts complete spacewalk to install new ISS camera
WASHINGTON: Two NASA astronauts have successfully completed a spacewalk to install a new camera system outside the International Space Station (ISS), replacing a blown fuse and installing a new high definition camera on the starboard truss of the station.

During the spacewalk, which lasted six hours and 49 minutes, the duo of Randy Bresnik, Expedition 53 Commander, and Joe Acaba, Flight Engineer, worked quickly and were able to complete several "get ahead" tasks, NASA's Melanie Whiting wrote.


Acaba greased the new end effector on the robotic arm. Bresnik installed a new radiator grapple bar and completed prep work for one of two spare pump modules on separate stowage platforms to enable easier access for potential robotic replacement tasks in the future.

"He nearly finished prep work on the second, but that work will be completed by future spacewalkers," Whiting wrote.
This was the fifth spacewalk of Bresnik's career - 32 hours total spacewalking - and the third for Acaba - 19 hours and 46 minutes total spacewalking.



Space station crew members have conducted 205 spacewalks in support of assembly and maintenance of the orbiting laboratory. Spacewalkers have now spent a total of 53 days, six hours and 25 minutes working outside the station.

Tuesday 17 October 2017

SCIENCE NEWS (Now, explore planets, moons using Google maps)

 Now, explore planets, moons using Google maps
LOS ANGELES: Google Maps has introduced a new feature that allows you to 'zoom out' from Earth and explore other planets and moons in the solar system. Some of the newly added bodies include the moons of Saturn, like Enceladus, Titan, and Mimas.

"Explore the icy plains of Enceladus, where Cassini discovered water beneath the moon's crust, suggesting signs of life. Peer beneath the thick clouds of Titan to see methane lakes," Stafford Marquardt, Product Manager at Google, wrote in a blog post.

"You can visit these places - along with many other planets and moons - in Google Maps right from your computer. For extra fun, try zooming out from the Earth until you are in space!" Marquardt said.

Google has also added Pluto, Venus, and several other moons, for a total of 12 new worlds for users to explore.
