
Caching for Self-Driving Cars in Multi-access Edge Computing

In this blog, Dr. Anselme Ndikumana proposes deep learning-based caching for self-driving cars in multi-access edge computing (MEC). Anselme received the B.S. degree in computer science from the National University of Rwanda in 2007 and the Ph.D. degree in computer engineering from Kyung Hee University, South Korea, in August 2019. He is currently a Lecturer at the Faculty of Computing and Information Sciences, University of Lay Adventists of Kigali, Rwanda. His research interests include deep learning, multi-access edge computing, information-centric networking, and in-network caching.

This article is based on the paper by Anselme and his collaborators published in IEEE Transactions on Intelligent Transportation Systems in March 2020.

Let’s hear from Dr. Ndikumana!


Today, passengers in cars choose the infotainment content to display or play during a journey. As we move toward self-driving cars, their interiors will offer new spaces that can be used for enhanced infotainment services. For passengers, a self-driving car will be a new place to engage with diverse infotainment services and content such as movies, TV, music, and games; it can also host emerging technologies such as Virtual, Augmented, and Mixed Reality. A self-driving car must therefore deliver content that is suitable and appropriate for its passengers based on characteristics such as age, emotion, and gender, and the cached content must not violate any content access policies. In addition, retrieving infotainment content from distant data centers can hinder these services due to high end-to-end delay.


A self-driving car can leverage nearby caching and computing facilities, such as multi-access edge computing (MEC) servers, to mitigate these issues. MEC is a network architecture concept that brings cloud computing capabilities and an IT service environment to the edge of any network.

We propose a novel approach that leverages deep learning and optimization methods to address the above-mentioned challenges. First, deep learning models predict the contents that need to be cached in the self-driving cars themselves and in close proximity to them, at MEC servers attached to roadside units (RSUs). RSUs are computing devices located on the roadside to support passing vehicles. Then, to retrieve the infotainment contents to cache, we define a communication model.

For the retrieved contents, we adopt a caching model for the self-driving car. Note that cached contents can be served in different formats and qualities based on demand, such as MP4 at 1024×768, so we define a computation model to cater to these requirements. We then formulate an optimization problem that links the proposed models, with the goal of minimizing content download delay. Such problems are usually non-convex and NP-hard, making an optimal solution difficult to obtain, so we rely on techniques such as block successive majorization-minimization.
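To give an intuition for the kind of trade-off the optimization captures, here is a minimal, hypothetical sketch (not the paper's actual formulation) of cache placement under a capacity constraint: contents are ranked greedily by the expected download delay they save per megabyte of cache used. All names and numbers are invented for illustration.

```python
# Illustrative sketch only: greedy delay-aware cache placement under a
# cache-capacity constraint. The paper's formulation is a joint non-convex
# problem solved via block successive majorization-minimization; this toy
# greedy heuristic just shows the delay-vs-capacity trade-off involved.

def greedy_cache(contents, capacity_mb):
    """contents: list of (name, size_mb, demand_prob, delay_saved_ms).
    Cache the contents with the highest expected delay saving per MB
    until the cache capacity is exhausted."""
    ranked = sorted(
        contents,
        key=lambda c: (c[2] * c[3]) / c[1],  # expected delay saved per MB
        reverse=True,
    )
    cached, used = [], 0
    for name, size, demand, saving in ranked:
        if used + size <= capacity_mb:
            cached.append(name)
            used += size
    return cached

# Hypothetical catalog: (name, size MB, demand probability, delay saved ms).
catalog = [
    ("movie_a", 700, 0.40, 120),
    ("series_b", 500, 0.25, 110),
    ("game_c", 300, 0.20, 90),
    ("music_d", 50, 0.15, 40),
]
print(greedy_cache(catalog, capacity_mb=800))  # → ['music_d', 'movie_a']
```

The greedy rule is only a surrogate for the real objective, but it illustrates why small, frequently requested contents can be more valuable to cache than large blockbusters.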

System model for deep learning based caching for self-driving cars.

In the system model shown in the figure above, a data center (DC) hosts the dataset used to build, train, and test the deep learning models, such as a Convolutional Neural Network (CNN) and a Multi-Layer Perceptron (MLP). These models predict passengers' features and the infotainment contents that need to be cached for self-driving cars. To reduce the communication delay between self-driving cars and the data center, the trained and tested models are deployed at MEC servers attached to the Road Side Units (RSUs). During off-peak hours, each RSU downloads the CNN model and the MLP output using backhaul communication resources. Then, using the MLP output, each MEC server downloads and caches the predicted infotainment contents. Here, we consider that people in different areas may demand different infotainment contents.

For self-driving cars, we consider that each car has an On-Board Unit (OBU) that can support caching and computation of infotainment contents for passengers. We chose self-driving cars because they already have OBUs with Graphics Processing Units (GPUs), Field Programmable Gate Arrays (FPGAs), and Application-Specific Integrated Circuits (ASICs) that can handle in-car AI. Each self-driving car can get broadband Internet service from an RSU.

To predict the passengers' features, we use the CNN model; this helps decide which infotainment contents to request and cache in the self-driving car to match those features. During off-peak hours, each self-driving car downloads the CNN model and the MLP output from its RSU. Using k-means and binary classification, the self-driving car compares its own CNN predictions with the MLP's predicted output, which identifies the infotainment contents that are appropriate to the passengers' features. Finally, the self-driving car downloads and caches the identified contents.
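One way to picture the matching step is a nearest-centroid lookup in feature space, in the spirit of k-means. The sketch below is an assumption for illustration only: the content classes, their centroids, and the two-dimensional feature encoding are all invented, not taken from the paper.

```python
# Minimal sketch (assumed, not the paper's code) of matching CNN-predicted
# passenger features to content classes. Each class has a centroid in a toy
# feature space [normalized age, positivity of emotion]; the car picks the
# class whose centroid is nearest to the predicted passenger feature vector.

import math

centroids = {                 # hypothetical content classes and centroids
    "kids": [0.1, 0.9],
    "family": [0.5, 0.7],
    "adult_drama": [0.8, 0.4],
}

def nearest_class(features, centroids):
    """Return the content class whose centroid is closest (Euclidean)."""
    return min(centroids, key=lambda c: math.dist(features, centroids[c]))

passenger = [0.15, 0.85]      # CNN-predicted features: young, happy passenger
print(nearest_class(passenger, centroids))  # → kids
```

In the actual system, this comparison is combined with binary classification so that only contents both predicted by the MLP and appropriate to the detected passengers are requested.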

The simulations in the article demonstrate that this caching approach can reduce backhaul traffic by 61%, i.e., caching for self-driving cars can serve 61% of all demands for infotainment contents. The prediction of which infotainment contents need to be cached at the RSUs and in the self-driving cars reaches 97.82% accuracy.

Contributed by Dr. Anselme Ndikumana

Statements and opinions given in this blog are the expressions of the contributor(s). Responsibility for the content of published articles rests upon the contributor(s), not on the IEEE Communication Society or the IEEE Communications Society Young Professionals.
