Sunday, June 9, 2013

Final individual milestone

Introduction:

As we have come to the conclusion of this semester, our group has designed a mechanical space-saving storage system that incorporates a natural user interface. At the start of the semester we had a bit of trouble getting our project started, as we lacked knowledge and direction. We spent our first couple of weeks on research so we could gain a better understanding of our project. We made the most progress during the final weeks, once we finally had the ball rolling, and we had a lot of help from Steve, for which we are very grateful! Ultimately we developed a working prototype and a Crysis model that incorporates hand gestures and facial recognition. All in all I am very proud of my group and what we have achieved and learnt over the past semester.

Our Group Goal:

As a group, our goal was to design a prototype of a space saving mechanical system that incorporates a natural user interface. We wanted to implement our project into Crysis with the Kinect and create a physical prototype of our model.

My Individual Milestones:          

As a group we worked towards a common goal, and along the way I had individual milestones to meet so that our group could stay on track and accomplish our final goal. Below is a list of my individual milestones:

Research
My first individual milestone was to acquire as much knowledge as possible to be able to begin our project. As Architecture Computing students we didn't have the background needed in mechanical and electrical engineering. I also needed to fill in my knowledge of C# and C++, as I was only familiar with HTML.

Facial Recognition Code
For our project I needed to determine whether facial recognition was necessary or whether facial detection would be enough. Initially I researched the different types of facial recognition and which method would be best suited to our purposes. I followed a few tutorials to create working facial recognition code for our project, only to realise it was in C# and needed to be in C++ to work with Crysis directly. Fortunately Steve helped me set up a TCP server so I could send the output from my facial recognition code to Crysis. This way I did not need to convert my C# code to C++.

Gesture Control Code
Steve provided me with the basic gesture control code for Crysis and I was able to add to it. By extending the code I created working gesture control for Crysis that recognises hand wave right, hand wave left, hand wave up and hand wave down.
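The code Steve gave me isn't reproduced here, but the basic idea behind classifying a wave is simple: track the hand position over a short window of frames and look at which direction it travelled the furthest. Below is a minimal sketch of that idea in C#; the class and method names and the 0.25 threshold are made up for illustration, and in the real code the positions come from the Kinect hand joint rather than plain numbers.

using System;

// Minimal sketch (not our actual project code): classify a hand wave by
// comparing where the hand started and ended over a short window of frames.
public enum Wave { None, Left, Right, Up, Down }

public static class WaveClassifier
{
    // Minimum distance the hand has to travel before we call it a wave
    // rather than jitter (value chosen arbitrarily for this example).
    const float Threshold = 0.25f;

    public static Wave Classify(float startX, float startY, float endX, float endY)
    {
        float dx = endX - startX;
        float dy = endY - startY;

        if (Math.Abs(dx) < Threshold && Math.Abs(dy) < Threshold)
            return Wave.None;

        // Whichever axis moved further decides the gesture.
        if (Math.Abs(dx) > Math.Abs(dy))
            return dx > 0 ? Wave.Right : Wave.Left;
        return dy > 0 ? Wave.Up : Wave.Down;
    }
}

public class WaveDemo
{
    public static void Main()
    {
        // The hand moved about 0.4 to the right and barely vertically -> Right.
        Console.WriteLine(WaveClassifier.Classify(0.0f, 0.0f, 0.4f, 0.05f));
    }
}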

Building the Physical Prototype
With the physical model we encountered quite a few problems, as we still did not know whether the mechanics of the prototype we had designed would actually work. If we were to do this project again, I would work backwards from the mechanics and seek more professional advice. We had our final physical prototype laser cut so it would slide smoothly and look more aesthetically pleasing. Even though the physical prototype was laser cut, the boxes did not slide smoothly because we had not considered some technical issues.

Tasks Completed Over the Semester:

Researched the different types of facial recognition and facial detection
Developed facial recognition code with the help of Steve
Linked my facial recognition code to Crysis using a TCP server
Troubleshot the problems encountered with my TCP server
Developed gesture control code with the help of Steve
Helped create the Illustrator file to send to the laser cutter
Helped the team leader with organising our team and allocating tasks
Created a Facebook page for easier communication


If you could do this project again, what would you do differently?

Research
o   During the initial stages of the project I would have liked to allocate our research time more efficiently. That way we would hopefully have been better placed to design the space saving mechanical storage system. Most importantly, we should have sought professional advice so we would have the best possible solution.
Coding
o   I initially followed tutorials and created my facial recognition code in C#. Unfortunately C# isn't directly compatible with Crysis, so I tried to convert my C# code to C++. I spent many weeks trying to convert it without success, and then started a new set of tutorials where the facial recognition code was written in C++. Next time I would know to just use a TCP server from the start and send the output from my laptop to Crysis. Fortunately, with Steve's help, I was able to use the TCP server with my C# code during this project.
Prototype
o   Next time we need to seek professional advice, as our prototype did not work as well as it could have. There were a few points we should have considered before constructing it:
§  Testing the capability of the motors
§  Acquiring professional mechanical and electrical engineering knowledge and expertise
§  The scale of the physical prototype
§  Materials being used in the physical prototype, e.g. can they be joined easily?

Conclusion:

Over the last semester I have gained knowledge in many different fields. Initially I didn't understand how we would be able to complete our project within our time frame, as we lacked a lot of knowledge. I am proud of my group and how much we have accomplished in that time. As this project was just a prototype, there are many things I would have liked to continue with and fix if our time frame had been extended. Overall I feel we have succeeded in creating a space saving mechanical storage system for our target market.



Saturday, June 8, 2013

Week 13 - Problems encountered with my TCP Server

When running the facial recognition with the TCP server, a HUD message comes up saying "waiting for client". Yuki and I checked to make sure the local host was indeed running on TCP and was listening. When we went into the command prompt and typed "netstat -an", 127.0.0.1 normally appeared under UDP, but when we ran it while logged into Crysis it appeared under TCP in the "TIME_WAIT" state (fig 1). We then pinged 127.0.0.1 to make sure there was an existing connection (fig 2), and it was indeed reachable. We then installed a program called Advanced Port Scanner and tried both ports that were open according to the program. Unfortunately we were still unable to get a connection.

Figure 1 

Figure 2

Figure 3
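For future debugging, there is also a quick way to check from code (rather than netstat) whether anything is actually listening on the expected port. This is only a small sketch, not part of our project code, and the port number 8001 is just an example taken from the sample server code in the post below.

using System;
using System.Net.NetworkInformation;

// Minimal sketch: list every local TCP port currently in the listening state,
// so we can confirm whether the server socket ever got that far.
public class PortCheck
{
    public static void Main()
    {
        var listeners = IPGlobalProperties.GetIPGlobalProperties()
                                          .GetActiveTcpListeners();
        bool found = false;
        foreach (var endpoint in listeners)
        {
            Console.WriteLine("Listening on " + endpoint);
            if (endpoint.Port == 8001)   // example port only
                found = true;
        }
        Console.WriteLine(found
            ? "Port 8001 is listening."
            : "Nothing is listening on port 8001.");
    }
}

A TIME_WAIT entry (as in fig 1) means a connection was made and then closed, rather than the port never being opened at all.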

Friday, June 7, 2013

Week 13 - Facial recognition code using a TCP Server to Crysis

In order to link my facial recognition code to Crysis, Steve suggested I use a TCP server and a client. He found the code below, which was very helpful.


/*   Server Program    */
                
using System;
using System.Text;
using System.Net;
using System.Net.Sockets;

public class serv {
    public static void Main() {
    try {
        IPAddress ipAd = IPAddress.Parse("172.21.5.99");
         // use local m/c IP address, and
         // use the same in the client

/* Initializes the Listener */
        TcpListener myList=new TcpListener(ipAd,8001);

/* Start listening at the specified port */
        myList.Start();
       
        Console.WriteLine("The server is running at port 8001...");   
        Console.WriteLine("The local End point is  :" +
                          myList.LocalEndpoint );
        Console.WriteLine("Waiting for a connection.....");
       
        Socket s=myList.AcceptSocket();
        Console.WriteLine("Connection accepted from " + s.RemoteEndPoint);
       
        byte[] b=new byte[100];
        int k=s.Receive(b);
        Console.WriteLine("Recieved...");
        for (int i=0;i<k;i++)
            Console.Write(Convert.ToChar(b[i]));

        ASCIIEncoding asen=new ASCIIEncoding();
        s.Send(asen.GetBytes("The string was received by the server."));
        Console.WriteLine("\nSent Acknowledgement");
/* clean up */           
        s.Close();
        myList.Stop();
           
    }
    catch (Exception e) {
        Console.WriteLine("Error..... " + e.StackTrace);
    }   
    }
   
}

---------------------------------------------------------------------------

/*       Client Program      */

using System;
using System.IO;
using System.Net;
using System.Text;
using System.Net.Sockets;


public class clnt {

    public static void Main() {
       
        try {
            TcpClient tcpclnt = new TcpClient();
            Console.WriteLine("Connecting.....");
           
            tcpclnt.Connect("172.0.0.1",6000);
            // use the ipaddress as in the server program
           
            Console.WriteLine("Connected");
            Console.Write("Enter the string to be transmitted : ");
           
            String str=Console.ReadLine();
            Stream stm = tcpclnt.GetStream();
                       
            ASCIIEncoding asen= new ASCIIEncoding();
            byte[] ba=asen.GetBytes(str);
            Console.WriteLine("Transmitting.....");
           
            stm.Write(ba,0,ba.Length);
           
            byte[] bb=new byte[100];
            int k=stm.Read(bb,0,100);
           
            for (int i=0;i<k;i++)
                Console.Write(Convert.ToChar(bb[i]));
           
            tcpclnt.Close();
        }
       
        catch (Exception e) {
            Console.WriteLine("Error..... " + e.StackTrace);
        }
    }

}



The code above allowed us to take the output from my facial recognition code and send it into Crysis. This works by using a local server IP address and declaring the same port number on both the server and the client. After incorporating the code above into my facial recognition code we were able to set up facial recognition in Crysis.
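Our exact integration code isn't shown here, but as a rough sketch of the idea: the client side boils down to keeping one TcpClient connection open and writing a short message whenever the recognition result changes, instead of reading a string from the console as in the sample above. The class below is only an illustration; the names, the newline-terminated message format and the "send only on change" behaviour are my own assumptions, not part of the sample.

using System.Net.Sockets;
using System.Text;

// Minimal sketch (not our exact code): keep one connection to the listener
// open and push the current face recognition result down it when it changes.
public class RecognitionSender
{
    private readonly TcpClient client = new TcpClient();
    private NetworkStream stream;
    private string lastSent = "";

    public void Connect(string host, int port)
    {
        client.Connect(host, port);   // same address and port as the listener
        stream = client.GetStream();
    }

    // Call this once per processed camera frame.
    public void SendResult(string recognisedName)
    {
        if (recognisedName == lastSent) return;   // only send changes
        byte[] data = new ASCIIEncoding().GetBytes(recognisedName + "\n");
        stream.Write(data, 0, data.Length);
        lastSent = recognisedName;
    }

    public void Close()
    {
        stream.Close();
        client.Close();
    }
}

Used roughly as: call Connect("127.0.0.1", 8001) once at start-up, then SendResult(name) from inside the recognition loop (with something like "unknown" when no face matches).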

Reference:

http://www.codeproject.com/Articles/1415/Introduction-to-TCP-client-server-in-C

Monday, June 3, 2013

Week 11 and 12 - Prototype

Building the final prototype actually took longer than we expected. We didn't address a lot of the design issues from our first prototype in the final prototype; a lot of the issues came from changing materials and assuming that would fix the problem. The main issue we faced was the connection between the boxes and the square knob that slides inside the rail. The problem was that a small plastic stick wouldn't be able to handle the weight and still slide smoothly, and the glue wasn't strong enough because there wasn't enough surface area to properly hold onto. We then decided to give the box a backing with a hole in it, to make a stronger bond between the box and the knob, and we also drilled a hole in the first layer of the knob. This proved to be much better but still wasn't strong enough.

The other major problem we encountered was the glue. We first started with the plastic-to-plastic glue the laser cutter provided us with. Unfortunately this glue wasn't strong enough and did not bond even after we left it for 24 hours to dry. We then switched to a hot glue gun, but it left a residue around the corners, so the knob inside the rail wasn't sliding properly.

Another problem we faced was the electromagnet. Our original electromagnet was powered by a 9V battery and the wire we were using was too heavily insulated. After talking to one of the Jaycar staff we bought copper wire that was only lightly insulated and a 12V battery. This allowed us to wrap the iron rod at least 20 times with the copper wire, so our electromagnet would be stronger.
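As a rough back-of-the-envelope note (we never measured this, and the coil resistance also changes with the new wire): for a coil around an iron core the field strength goes roughly as B ≈ μ0·μr·N·I / L, with the current set by I = V / R. The extra turns increase N, and moving from a 9V to a 12V battery increases I by about a third for the same coil resistance, which is why both changes made the electromagnet noticeably stronger.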

We've learnt a lot from building this prototype and realised only at the end that we needed a lot more professional advice about our design and its feasibility.


Tuesday, May 28, 2013

Week 11 - Remuneration

Team Name: Vivid Pirates

Clarity of the oral presentation

The presentation wasn't well prepared; they were all over the place, switching between different screens and a bit confused within their group as to where they were in the presentation.

Clarity of the written presentation

The information they provided was all material you could find online about the subject. It would have been more interesting if they had applied their own touch to it, and I guess this is where the examples play a really big role.

Distinctiveness and specificity of the examples

I feel they lacked examples related to their project that we didn't already know about. There were plenty of more interesting examples they could have used that would have been better suited to their topic.

Referencing

Their referencing was evident in a few places but not in all their slides.

The conceptual context:

I felt they were progressing quite well, as I saw their coding progress in class. The disadvantage for them is that they need to prototype all their code and have it working perfectly before implementing it in the real world.

The still images:

There were many images and videos evident on their wiki. Some of the images they used were actually quite specific to their presentation, e.g. Felix's payslip.

Thursday, May 23, 2013

Week 10 - Laser Cutting

After the initial prototype we decided we needed to use another material that slides a lot more easily and looks a lot cleaner when constructing the prototype. Ben made a small plastic rail so we were able to see how well plastic slid against plastic. We decided to have our pieces laser cut out of 3mm white plastic. Unfortunately we didn't have someone double-check the file before it was sent through, so the first lot we had laser cut was not to the right measurements. Luckily the laser cutters agreed to get the second lot, cut to the correct measurements, back to us within a day so we would have enough time to put the prototype together.

Below is an image of the second laser cutting file we sent through.


Wednesday, May 22, 2013

Week 9 - 12 - Facial Recognition to Crysis

Unfortunately, because I needed my facial recognition to be in C++ and not C#, I needed to convert my C# code to C++. After some reading I found that converting between the languages is possible, though difficult, due to differences in approach; the framework calls need to be ported to different libraries, so it is often not a good candidate for a direct translation. I therefore started re-writing my code in C++, but I still encountered problems, and because I am very new to coding it took me a very long time to understand the syntax and not get confused between the two languages. I then decided to start fresh with another facial recognition code in C++. I found this tutorial online at http://www.shervinemami.info/faceRecognition.html.

The facial recognition according to his code runs as below:

1. Grab a frame from the camera 

// Grab the next camera frame. Waits until the next frame is ready, and
// provides direct access to it, so do NOT modify or free the returned image!
// Will automatically initialize the camera on the first frame.
IplImage* getCameraFrame(CvCapture* &camera)
{
 IplImage *frame;
 int w, h;

 // If the camera hasn't been initialized, then open it.
 if (!camera) {
  printf("Acessing the camera ...\n");
  camera = cvCreateCameraCapture( 0 );
  if (!camera) {
   printf("Couldn't access the camera.\n");
   exit(1);
  }
  // Try to set the camera resolution to 320 x 240.
  cvSetCaptureProperty(camera, CV_CAP_PROP_FRAME_WIDTH, 320);
  cvSetCaptureProperty(camera, CV_CAP_PROP_FRAME_HEIGHT, 240);
  // Get the first frame, to make sure the camera is initialized.
  frame = cvQueryFrame( camera );
  if (frame) {
   w = frame->width;
   h = frame->height;
   printf("Got the camera at %dx%d resolution.\n", w, h);
  }
  // Wait a little, so that the camera can auto-adjust its brightness.
  Sleep(1000); // (in milliseconds)
 }

 // Wait until the next camera frame is ready, then grab it.
 frame = cvQueryFrame( camera );
 if (!frame) {
  printf("Couldn't grab a camera frame.\n");
  exit(1);
 }
 return frame;
}

2. Convert the colour frame to greyscale

 // If the image is color, use a greyscale copy of the image.
 detectImg = (IplImage*)inputImg;
 if (inputImg->nChannels > 1) {
  size = cvSize(inputImg->width, inputImg->height);
  greyImg = cvCreateImage(size, IPL_DEPTH_8U, 1 );
  cvCvtColor( inputImg, greyImg, CV_BGR2GRAY );
  detectImg = greyImg; // Use the greyscale image.
 }

3. Detect a face within the greyscale camera frame

// Perform face detection on the input image, using the given Haar Cascade.
// Returns a rectangle for the detected region in the given image.
CvRect detectFaceInImage(IplImage *inputImg, CvHaarClassifierCascade* cascade)
{
 // Smallest face size.
 CvSize minFeatureSize = cvSize(20, 20);
 // Only search for 1 face.
 int flags = CV_HAAR_FIND_BIGGEST_OBJECT | CV_HAAR_DO_ROUGH_SEARCH;
 // How detailed should the search be.
 float search_scale_factor = 1.1f;
 IplImage *detectImg;
 IplImage *greyImg = 0;
 CvMemStorage* storage;
 CvRect rc;
 double t;
 CvSeq* rects;
 CvSize size;
 int i, ms, nFaces;

 storage = cvCreateMemStorage(0);
 cvClearMemStorage( storage );
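 // (The greyscale conversion shown in step 2 goes here in the tutorial's full
 //  function, so that detectImg refers to a single-channel image before the
 //  detection call below.)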


 // Detect all the faces in the greyscale image.
 t = (double)cvGetTickCount();
 rects = cvHaarDetectObjects( detectImg, cascade, storage,
   search_scale_factor, 3, flags, minFeatureSize);
 t = (double)cvGetTickCount() - t;
 ms = cvRound( t / ((double)cvGetTickFrequency() * 1000.0) );
 nFaces = rects->total;
 printf("Face Detection took %d ms and found %d objects\n", ms, nFaces);

 // Get the first detected face (the biggest).
 if (nFaces > 0)
  rc = *(CvRect*)cvGetSeqElem( rects, 0 );
 else
  rc = cvRect(-1,-1,-1,-1); // Couldn't find the face.

 if (greyImg)
  cvReleaseImage( &greyImg );
 cvReleaseMemStorage( &storage );
 //cvReleaseHaarClassifierCascade( &cascade );

 return rc; // Return the biggest face found, or (-1,-1,-1,-1).
}

4. Crop the frame to just the detected face region, using cvSetImageROI() and cvCopyImage().

5. Pre-process the face image

// Either convert the image to greyscale, or use the existing greyscale image.
IplImage *imageGrey;
if (imageSrc->nChannels == 3) {
 imageGrey = cvCreateImage( cvGetSize(imageSrc), IPL_DEPTH_8U, 1 );
 // Convert from RGB (actually it is BGR) to Greyscale.
 cvCvtColor( imageSrc, imageGrey, CV_BGR2GRAY );
}
else {
 // Just use the input image, since it is already Greyscale.
 imageGrey = imageSrc;
}

// Resize the image to be a consistent size, even if the aspect ratio changes.
IplImage *imageProcessed;
imageProcessed = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 1);
// Make the image a fixed size.
// CV_INTER_CUBIC or CV_INTER_LINEAR is good for enlarging, and
// CV_INTER_AREA is good for shrinking / decimation, but bad at enlarging.
cvResize(imageGrey, imageProcessed, CV_INTER_LINEAR);

// Give the image a standard brightness and contrast.
cvEqualizeHist(imageProcessed, imageProcessed);

.....  Use 'imageProcessed' for Face Recognition ....

if (imageGrey)
 cvReleaseImage(&imageGrey);
if (imageProcessed)
 cvReleaseImage(&imageProcessed);

6. Recognise the person in the image

After talking to Steve, he told me to use a TCP server to connect the output from my facial recognition to CryEngine. He told me to do the following to link his TCP server code to the facial recognition:

"Your main() loop from the C++ should go into the Update() function. Any variables declared outside the main() loop should become member variables (ie, declared in the .h file, then optionally instantiated in the constructor in the .cpp file). Any #include lines you have in your face detection code should go in the .h file, *below* the existing #include lines." Steve

Unfortunately I didn't have a .h file and everything was in my .cpp file. My main() loop was also set up incorrectly, as the actual facial recognition wasn't inside that loop. I went to Steve to fix this problem and he said the easiest solution was to use the TCP server with the old C# code. Please follow the link below to see "Facial recognition code using a TCP Server to Crysis".