MTL Paper Data

I want to extract data for writing a paper about end-to-end deep learning for self-driving cars using Multi-Modal or Multi-Task Learning (MTL). To do this I have access to a large dataset of driving data collected from 1/10th-scale model cars.

Some of the modes of data available to me:

  • Direct Mode (Driving on a Path or Sidewalk)
  • Follow Mode (Following another model car)
  • Furtive Mode (Hides in bushes and darts across open areas)
  • Race Mode (High Speed Data from Race Track)
  • Play Mode (Kids Driving Cars and Playing Around)

My goal is to train MTL (Multi-Task Learning) and Non-MTL networks and compare their performance on these datasets. To narrow down the data initially, I remove Race and Play Mode from consideration entirely, since a lot of this data is erratic and hard to train on.
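Roughly, that data selection looks like the minimal sketch below; the DriveRun structure, mode labels, and filter_runs helper are placeholders for illustration, not the actual dataset format:

    # Minimal sketch of mode filtering (hypothetical data layout, not the real dataset format).
    from dataclasses import dataclass
    from typing import List

    MODES = {"direct": 0, "follow": 1, "furtive": 2, "race": 3, "play": 4}
    EXCLUDED_MODES = {"race", "play"}  # too erratic / hard to train on

    @dataclass
    class DriveRun:
        path: str  # directory of camera frames plus steering/throttle labels
        mode: str  # one of MODES

    def filter_runs(runs: List[DriveRun]) -> List[DriveRun]:
        """Keep only runs from modes we intend to train on."""
        return [r for r in runs if r.mode not in EXCLUDED_MODES]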

Furthermore, since the Non-MTL network will not be given the mode information, I should use visually distinguishable driving modes to keep the comparison fair. The two most easily distinguishable are Direct and Follow, since roughly 99% of the time, if there is another car in the frame, it is a Follow Mode shot.

Next came the choice of network. I decided on SqueezeNet, as this is the network my colleague Tushar Pankaj and I have been working with for quite some time. Here is a picture of the original SqueezeNet network and the modifications I made for the MTL and Non-MTL driving networks:


Figure 1: (Left) Original SqueezeNet for Classification, (Middle) SqueezeNet for Driving w/o MTL, (Right) SqueezeNet for Driving w/ MTL
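As a rough illustration of the structural difference (the exact layer changes are in Figure 1, not reproduced in code here), the PyTorch sketch below shows one plausible reading: both variants share the SqueezeNet convolutional trunk, the Non-MTL head regresses steering and throttle from pooled features, and the MTL head additionally receives the mode as a one-hot vector. The class name, head sizes, and the choice to inject the mode as an input rather than predict it as an auxiliary output are assumptions for illustration.

    # Hedged sketch of the two driving variants on a SqueezeNet trunk (PyTorch).
    import torch
    import torch.nn as nn
    from torchvision.models import squeezenet1_1

    NUM_MODES = 2  # Direct, Follow

    class DrivingSqueezeNet(nn.Module):
        def __init__(self, use_mtl: bool = False):
            super().__init__()
            self.use_mtl = use_mtl
            self.features = squeezenet1_1().features      # conv trunk, randomly initialized
            self.pool = nn.AdaptiveAvgPool2d(1)
            in_dim = 512 + (NUM_MODES if use_mtl else 0)   # trunk outputs 512 channels
            self.head = nn.Sequential(
                nn.Linear(in_dim, 128),
                nn.ReLU(inplace=True),
                nn.Linear(128, 2),                         # steering, throttle
            )

        def forward(self, frames, mode_onehot=None):
            x = self.pool(self.features(frames)).flatten(1)  # (N, 512)
            if self.use_mtl:
                x = torch.cat([x, mode_onehot], dim=1)       # append mode information
            return self.head(x)

The Non-MTL variant is simply the same module constructed with use_mtl=False, so the only difference exposed to training is whether the mode information reaches the head.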

For my first round of testing, I decided to compare the following (a rough configuration sketch follows the list):

  1. An MTL network trained on Direct + Follow data and validated only on an unseen Direct dataset
  2. A Non-MTL network trained only on Direct data and validated on an unseen Direct dataset
  3. A Non-MTL network trained on Direct + Follow data and validated on an unseen Direct dataset
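The sketch below spells out how those three jobs could be wired up, reusing the DriveRun and DrivingSqueezeNet sketches above; train_fn and validate_fn are stand-ins for the real training and evaluation loops, not the actual pipeline:

    # Sketch of the three training configurations (helpers passed in are placeholders).
    EXPERIMENTS = [
        # (name,                  use_mtl, training modes)
        ("mtl_direct_follow",     True,  ["direct", "follow"]),
        ("nonmtl_direct_only",    False, ["direct"]),
        ("nonmtl_direct_follow",  False, ["direct", "follow"]),
    ]

    def run_experiments(train_runs, val_direct_loader, train_fn, validate_fn):
        """train_fn(net, runs) fits a network; validate_fn(net, loader) returns a score."""
        results = {}
        for name, use_mtl, modes in EXPERIMENTS:
            net = DrivingSqueezeNet(use_mtl=use_mtl)
            subset = [r for r in train_runs if r.mode in modes]
            train_fn(net, subset)                                 # fit on the selected modes
            results[name] = validate_fn(net, val_direct_loader)   # unseen Direct data only
        return results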

If I can show that introducing Follow data, which is unrelated to the Direct task, produces some kind of "transfer learning" between modes and improves performance on the Direct validation set, then this will be a relevant result. If I can also show that the MTL network outperforms the Non-MTL network, then this will be a strong affirmation of this training style. The networks are training now; I should have results in the next few days. :v:
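For reference, here is a minimal sketch of how validate_fn from the wiring sketch could compare the three networks, assuming the metric is mean squared error between predicted and recorded steering/throttle on the held-out Direct set; the loader format and metric are assumptions, not the actual evaluation code:

    # Sketch of the validation comparison (assumes an MSE metric on steering/throttle;
    # the loader format is a placeholder, not the actual evaluation code).
    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def validate(net, val_loader):
        net.eval()
        total, count = 0.0, 0
        for frames, targets, mode_onehot in val_loader:
            preds = net(frames, mode_onehot) if net.use_mtl else net(frames)
            total += F.mse_loss(preds, targets, reduction="sum").item()
            count += targets.numel()
        return total / count  # lower is better; compare across the three networks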

Written on August 13, 2017