Aquila 1.8alpha released

I would like to announce that we have just released a new version of Aquila. This is the first alpha version (v1.8a), which is much more stable than the previous versions. Apart from several bug fixes, this version has two major additions. The first is an improvement in stability: we replaced the previous rendering implementation, which caused occasional segmentation faults, with a new one that is reliable and equally fast. The second is a new ERA (Epigenetic Robotics Architecture) module, which adds the functionality of the ERA architecture in a fully configurable manner rather than hard-wired as in the MODI experiment tab. More advanced visual filters and free speech via YARP ports are also included.

There are a few new developers working on extending the Kinect module to improve the precision of the control, make the interaction more fluid and add iCub leg control. Other developers are looking into developing an EEG interface to control objects in the simulator.

Aquila can be downloaded from the iCub and ITALK repositories as well as from the main SourceForge repository that we use for development. New developers are welcome to join the project via SourceForge and have their own branch in the repository. The Aquila 1.7b manual is in the root directory and provides basic instructions on how to use the different modules as well as guidelines on how to compile the software. The 1.8a version of the manual is not yet finished, but if you have any problems installing the latest version, let us know.

The latest updates are always on Aquila's Facebook page, click here to explore it.

Last Updated on Sunday, 27 November 2011 19:38
iCub's Upgrade
iCub's hands

Our humanoid robot iCub is going to be upgraded with new touch sensors and artificial skin. First, the robot's hands will be sent to IIT where skilled technicians will install new fingertips and the skin on each forearm. Later, we will send the whole robot for a complete upgrade. 

I have met today with our technician Bill, who also attended iCub technical training at IIT back in 2008. We have carefully disconnected all the wires and removed iCub's arms, which will be sent to Italy this Friday. In the meanwhile, we can keep using the remaining parts of the robot and carry on doing our research.

The video below shows how our iCub will look once the upgrade is completed. The video is also interesting because it shows the force control system utilising the new skin, which is a big advantage: one can apply force anywhere on the skin and the arm will respond correctly. In contrast, the force control system that does not use the skin assumes that forces are only applied at the end effectors, i.e. the hands/wrists.

[Embedded YouTube video]
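The difference between the two control schemes can be sketched with a little kinematics. This is only an illustrative example, not Aquila or iCub code: with skin sensing, a measured contact force can be mapped to joint torques at any point on the arm via the Jacobian of that contact point, tau = J(q, p)^T f, whereas without skin the controller has to assume the contact is at the end effector. The two-link planar arm, link lengths and numbers below are all assumptions made for the sketch.

```python
import numpy as np

L1, L2 = 0.3, 0.25                # link lengths of a 2-link planar arm (assumed)

def contact_jacobian(q, r):
    """Jacobian of a contact point located a fraction r along link 2.

    r = 1.0 corresponds to the end effector (hand/wrist); r < 1.0 is
    a point on the forearm that only skin sensing could localise.
    """
    q1, q2 = q
    l2 = r * L2                   # distance from the elbow to the contact point
    s1, s12 = np.sin(q1), np.sin(q1 + q2)
    c1, c12 = np.cos(q1), np.cos(q1 + q2)
    return np.array([
        [-L1 * s1 - l2 * s12, -l2 * s12],
        [ L1 * c1 + l2 * c12,  l2 * c12],
    ])

q = np.array([0.4, 0.8])          # joint angles in radians (assumed pose)
f = np.array([0.0, -5.0])         # contact force in N, pushing downwards

# The same force applied mid-forearm vs at the hand maps to different torques,
# which is why assuming end-effector contact gives the wrong response:
tau_mid = contact_jacobian(q, 0.5).T @ f
tau_hand = contact_jacobian(q, 1.0).T @ f
```

Comparing `tau_mid` and `tau_hand` shows why the skin matters: a controller that always uses the end-effector Jacobian would react with the wrong joint torques whenever someone pushes the forearm instead of the hand.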

Last Updated on Tuesday, 22 November 2011 23:10
LightHead - the retro-projected face with real-time animations


In 2009, PhD researcher and my good friend Frédéric Delaunay from The University of Plymouth pioneered a novel design for a humanoid face. Fred is doing his PhD as part of The CONCEPT Project led by Tony Belpaeme. His design is a cheaper and more robust alternative to other robot face technologies because it is based on retro-projected, live-generated face animations. As you will see in the video below, many current designs face serious problems when it comes to their price, maintenance, the realism of their facial expressions and the HRI (Human Robot Interaction) experience. Fred's system uses a small laser projector that projects real-time facial expressions onto a semi-transparent face mask. These real-time animations are generated by his software, which is based on Blender, an open-source 3D content creation suite.

Fred is currently extending the system so that it can elicit rich and dynamic expressions based not only on the animated facial expressions themselves but also on their dynamics as well as the dynamics of the head movements. I think we have a lot to look forward to, especially because many ongoing projects are using similar retro-projected faces. If you would like to stay updated on Fred's work, follow the LightHead page on Google+.

[Embedded YouTube video]

Last Updated on Thursday, 17 November 2011 17:59
NVIDIA CEO Jen-Hsun Huang mentions my research during the SC11 conference in Seattle
[Embedded YouTube video]
Last Updated on Wednesday, 16 November 2011 12:02
Technology Behind Google Autonomous Car
Google Autonomous Car

Google have been working hard on their autonomous vehicle, which has already driven over 190,000 miles. Last month's IROS conference, where I demonstrated Aquila, was also where Google unveiled details of the technology used in their autonomous car.

"According to the World Health Organization, more than 1.2 million lives are lost every year in road traffic accidents. We believe our technology has the potential to cut that number, perhaps by as much as half. We're also confident that self-driving cars will transform car sharing, significantly reducing car usage, as well as help create the new 'highway trains of tomorrow'. These highway trains should cut energy consumption while also increasing the number of people that can be transported on our major roads. In terms of time efficiency, the U.S. Department of Transportation estimates that people spend on average 52 minutes each working day commuting. Imagine being able to spend that time more productively.

We’ve always been optimistic about technology’s ability to advance society, which is why we have pushed so hard to improve the capabilities of self-driving cars beyond where they are today. While this project is very much in the experimental stage, it provides a glimpse of what transportation might look like in the future thanks to advanced computer science. And that future is very exciting." 

[Embedded YouTube video]

Last Updated on Thursday, 20 October 2011 16:10
Internet - the extension of the mind

I was watching the 'We are all cyborgs now' TED talk by Amber Case about the impact of technology on our evolution. I found it quite interesting, and if you do as well then you might want to read Andy Clark's book 'Supersizing the Mind', which is very relevant and provides much deeper and more detailed insights into the workings of the human mind.

"Supersizing the Mind is tantalizing in many respects, and Clark's ingenuity is always on display. Just as his earlier Being There launched many a research project, we expect that Supersizing the Mind will inspire a new generation of philosophers, psychologists, and artificial intelligence researchers to reconsider some basic assumptions about the mind."--Lawrence Shapiro and Shannon Spaulding, Notre Dame Philosophical Reviews

Internet Connectivity Map


Talented VFX artist and a big NVIDIA fan, Saurabh Mazumder


Shortly after I wrote my first article for NVIDIA, I got a message from Saurabh about his mod that uses CUDA to accelerate games. Soon after, we became friends on Facebook and I found out about his artistic skills.

I was very impressed by his creations, and when I last travelled to San Francisco, I asked Saurabh to design the slides for my presentation at NVIDIA headquarters. He kindly agreed and came up with a nice design within a couple of hours.

Apart from being a great artist, programmer and friend, Saurabh is also a highly motivated individual who will most definitely achieve a lot in his life. This post is dedicated to you, brother, keep it up!


Last Updated on Friday, 07 October 2011 02:20
Presentation at NVIDIA Headquarters in Santa Clara
Some of you will remember that around a year ago I wrote an article for NVIDIA about Aquila and the important role of the CUDA architecture in my PhD research. I showed some of my preliminary results on action acquisition using a GPU-accelerated multiple timescales recurrent neural network (MTRNN) able to learn several behaviours and control the iCub humanoid robot. This initial study was published in the ICDL-EPIROB proceedings this year.
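For readers unfamiliar with the model, the core idea of an MTRNN is that different groups of neurons integrate their inputs at different speeds, set by per-unit time constants: fast units capture movement details while slow units capture the overall behaviour. The following is only a minimal sketch of that leaky-integrator update, not the actual Aquila/CUDA implementation; the layer sizes, time constants, random weights and sinusoidal input are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_io, n_fast, n_slow = 10, 30, 20          # input/output, fast and slow context units
n = n_io + n_fast + n_slow

# Each unit has a time constant tau: small = fast dynamics, large = slow.
tau = np.concatenate([
    np.full(n_io, 2.0),
    np.full(n_fast, 5.0),
    np.full(n_slow, 70.0),
])

W = rng.normal(0.0, 0.1, size=(n, n))      # recurrent weights (fully connected here)
b = rng.normal(0.0, 0.5, size=n)           # per-unit biases

def mtrnn_step(u, x, sensory):
    """One leaky-integrator update; sensory input drives the IO units only."""
    inp = np.zeros(n)
    inp[:n_io] = sensory
    u_new = (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ x + b + inp)
    return u_new, np.tanh(u_new)           # membrane potentials and activations

u = np.zeros(n)
x = np.tanh(u)
for t in range(100):
    u, x = mtrnn_step(u, x, np.sin(0.1 * t) * np.ones(n_io))
```

Because of the large time constant, the slow units' potentials drift gradually while the IO and fast units track the input closely; in the trained network it is this separation of timescales that lets one network store several behaviours.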

It is incredible to look back and see how much Aquila has progressed since then, both in terms of features and worldwide recognition. Aquila's PCCAT2011 publication received the best paper award, and the software was presented and demonstrated during the IJCNN and IROS conferences this August and September respectively. This is just the beginning: expect major advancements towards the end of this year, as I am going to be extending the existing MTRNN system with a biologically inspired vision system and a linguistic module. Anthony Morse, who is also developing in Aquila, plans to extend his research and take it a few steps further.

Just before going to San Francisco, I was approached by Calisa Cole, who invited me to give a presentation at NVIDIA's headquarters in Santa Clara, just a one-hour drive from my hotel. I could not refuse and prepared over 50 slides covering everything from the ITALK project and Aquila to the Synergy Moon project, which I joined a couple of months ago.
me at NVIDIA

I invited Kevin Myrick, one of the leading team members of Synergy Moon, to come with me to NVIDIA. Kevin gladly accepted since he lives very close by and is equally interested in AI and parallel programming. Kevin also helped me with the slides for the Synergy Moon part of my talk, highlighting areas where parallel computing would greatly improve our systems used for mission planning and control. I should not forget to mention my friend and big NVIDIA fan Saurabh Mazumder, a very talented young digital artist who created the design for my slides.

I met Kevin this Thursday in front of the Hilton hotel where I was staying during the IROS conference. We were picked up by NVIDIA's car service at 11:45 and arrived at NVIDIA's headquarters just before 1pm. The driver was really nice and kept telling us how much he loves working for NVIDIA.
NVIDIA car service and happy driver 
The NVIDIA headquarters building and its surroundings were beautiful. When I entered the main reception, I was pleasantly surprised by the lovely, warm welcome at the reception desk.

NVIDIA headquarters 
We signed in to their central system, took a few pictures at the reception and were shortly afterwards welcomed by Calisa, who took us to the conference room. I tested my iPad with their projector and sound system and everything worked perfectly. Shortly after, refreshments arrived and people started gathering in the room.
Kevin Myrick and myself at NVIDIA headquarters 
Calisa introduced me and asked everybody in the room to introduce themselves. All I can say is that I felt honoured to have the audience I had. The talk was titled "CUDA-based approach to cognitive robotics" and put a lot of emphasis on the importance of CUDA in our current and future research for the ITALK, Poeticon+ and Synergy Moon projects. I did not forget to mention that Anthony Morse and I have been promoting CUDA at The University of Plymouth (see my previous article on CUDA and its role in the future), that we have built a GPU cluster and that we have given a few talks on CUDA. I also mentioned that I have submitted two research proposals to NVIDIA: one on behalf of our research group at Plymouth (Fermi cards) and the other on behalf of Synergy Moon (Tegra systems).
presentation at NVIDIA headquarters 
The talk lasted for one hour, after which we had a short Q&A session followed by an informal chat during which we received lots of invaluable information and small presents from NVIDIA.

I was invited to come again in May and participate in the most important event that NVIDIA organises, the GTC conference, usually attended by over 2,000 people. NVIDIA's CEO will be highlighting the major achievements of the year, and there is a possibility that some of my work, and perhaps some of our plans to use Tegra systems in Synergy Moon, will be included in his story.
This link will take you to the whole 'San Francisco (IROS and NVIDIA) 2011' album.
Last Updated on Saturday, 01 October 2011 11:27
International Conference on Intelligent Robots and Systems

The International Conference on Intelligent Robots and Systems (IROS) took place at the Hilton hotel in San Francisco. The conference was extremely busy, with over 1,000 attendees. Giorgio Metta, Francesco Nori, Lorenzo Natale, Marco Randazzo and the other guys from IIT were already there when I arrived, and our iCub stand was nicely presented.

 iCub stand


Our proposal for demoing Aquila was accepted and also selected for the spotlight presentation that I gave this Monday. I got a chance to talk to quite a big audience about Aquila and some of the demos I was going to present on Wednesday afternoon.


Aquila spotlight presentation at IROS2011  

Everything turned out well despite the limited time I had to set up my demos. The iCub teleoperation demo was very popular and I let a few people try it and control the iCub with their own bodies. We ended up having all sorts of fun with it, like trying to lift or slide a box on a table.


Lorenzo teleoperating iCub 

This link will take you to the whole 'San Francisco (IROS and NVIDIA) 2011' album.

Last Updated on Monday, 03 October 2011 15:23
Aquila 1.75 beta released

A new version of Aquila, 1.75b, has just been released. This release catches up with the latest versions of CUDA, OpenCV, Qwt and YARP and should be easier to install. The Aquila manual has also been updated and describes the installation and every single module in detail (32 pages).

New developers are welcome to join through this site and the latest updates are always on Aquila's Facebook page, click here to explore it.

Last Updated on Friday, 23 September 2011 14:42