Khaled Fouad Elsayed Elkhamy, Ph.D.

Research

Over my research career, I have focused on a wide range of problems in communication network performance and architectures. These problems are solved using techniques from Optimization Theory, Queueing Theory, Game Theory, Graph Theory, Control Theory, Discrete-Event Simulation, and, most importantly, common sense! My work has also included implementing communication protocols on embedded platforms.

My current research work focuses on radio resource management and architectures for 4G/5G wireless systems; devising scalable standards-based Internet of Things systems; and applications of machine learning in telecommunications.

I have been very successful in applying research to real-world products. Since my early career, I have been actively involved in various roles at many industrial companies, including multinationals (Nortel, Auspex Systems, Wimatek Systems, Synopsys) and Egyptian companies with a direct presence in the international market (SySDSoft, ITWorx, SI-Vision). My research has either benefited the development of many of the products I helped bring to market, or benefited tremendously from the industrial work by staying linked to real-world problems. I am an academic who likes to get his hands dirty! I enjoy getting products out to the market as much as exploring new concepts and writing research papers on the bleeding edge of technology.


I) Radio Resource Management and Architectures for 4G/5G Wireless Systems

Demand for "always on" wireless connectivity has witnessed phenomenal growth in the past decade. The growth rate in wireless access demand has exceeded both the communication channel capacity limits and Moore's law for communication circuit density. It has become evident that this demand cannot be met by capacity-approaching coding methods or by more complex signal processing techniques alone, as was the case in the evolution from 1G to 2G or from 2G to 3G. Moreover, with the expected massive deployment of IoT devices in beyond-4G (B4G) and 5G systems, a new communication paradigm arises in which a huge number of devices (constrained in energy and computing resources) access the wireless channel for sporadic, short periods of time. This challenges traditional radio system planning techniques. New system and communication architectures/paradigms and new radio resource management techniques are clearly needed for B4G/5G wireless systems.

My research tackles multiple problems in this domain:

  • Radio resource management in heterogeneous wireless systems: These systems comprise traditional cells along with pico/femto cells, coordinated multi-point transmission (CoMP), and relay-enhanced cells. This includes problems in inter-cell interference coordination, coordinated scheduling for multiple traffic types, distributed MIMO, and optimized cell association/load balancing. Moreover, we explore new paradigms such as device-to-device (D2D) communications and optimize radio resources in that setting as well.
  • Optimizing resource allocation in non-orthogonal multiple-access (NOMA) systems such as Sparse Code Multiple Access (SCMA): NOMA schemes pack more users over limited wireless channels by eliminating the need for orthogonal resource allocation as in OFDMA systems, and are recommended for supporting the massive machine-type communications (MTC) expected in large-scale IoT deployments. However, non-orthogonality results in interference, and innovative resource management and receiver designs are needed to remedy the associated problems. We explore new methods for codebook assignment and power allocation to alleviate the problems brought in by non-orthogonality.
  • Massive machine-type communications: In massive machine-type communications, sending short bursts of data is commonplace in IoT deployments. Traditional access methods in systems such as LTE depend on the uplink random access channel (RACH), where users send connection requests to the base station via the RACH and the base station subsequently allocates proper resources. This process is horrendously inefficient for short bursts. We devise new methods for using the RACH as the main communication channel for MTC devices with sporadic communication patterns, and have used Q-learning to devise efficient algorithms for sharing the RACH between traditional users and MTC devices.
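To give a concrete flavor of the power-allocation side of this work, the classical water-filling rule can be sketched as follows. This is a standard textbook building block rather than our specific SCMA algorithm, and all channel gains and power budgets below are illustrative:

```python
import numpy as np

def water_filling(gains, total_power, noise=1.0):
    """Classical water-filling: split total_power across channels with the
    given gains to maximize the sum of log(1 + gain * power / noise)."""
    inv = noise / np.asarray(gains, dtype=float)   # per-channel "floor" height
    order = np.argsort(inv)                        # best channels first
    # Try filling the k best channels until the water level covers all of them.
    for k in range(len(inv), 0, -1):
        idx = order[:k]
        level = (total_power + inv[idx].sum()) / k  # common water level
        if level > inv[idx].max():                  # all k allocations positive
            p = np.zeros(len(inv))
            p[idx] = level - inv[idx]
            return p
    return np.zeros(len(inv))

# Toy example: the weakest channel (gain 0.2) receives no power.
p = water_filling([2.0, 1.0, 0.2], total_power=3.0)
```

Stronger channels sit on a lower "floor" and therefore receive more power; in an actual NOMA setting the allocation must additionally account for inter-layer interference, which is where the innovation lies.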

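The Q-learning idea for RACH access mentioned above can be illustrated with a deliberately simplified single-device sketch. The per-preamble success probabilities (modeling congestion the device cannot observe directly) and all parameters are hypothetical; our actual algorithms handle many interacting devices and traffic classes:

```python
import random

random.seed(1)
SUCCESS_PROB = [0.2, 0.4, 0.6, 0.9]   # assumed per-preamble success rates
ALPHA, EPS, STEPS = 0.1, 0.1, 5000    # learning rate, exploration, episodes

Q = [0.0] * len(SUCCESS_PROB)         # one Q-value per RACH preamble
for _ in range(STEPS):
    # Epsilon-greedy preamble selection.
    if random.random() < EPS:
        a = random.randrange(len(Q))
    else:
        a = max(range(len(Q)), key=Q.__getitem__)
    # Reward 1 on a successful access attempt, 0 on collision/failure.
    reward = 1.0 if random.random() < SUCCESS_PROB[a] else 0.0
    Q[a] += ALPHA * (reward - Q[a])   # stateless Q-learning update

best = max(range(len(Q)), key=Q.__getitem__)   # learned preamble choice
```

The device converges to favoring the least congested preamble without any explicit signaling from the base station, which is the property we exploit for sporadic MTC traffic.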

II) Scalable Standards-based Internet of Things Systems

The Internet of Things (IoT) is a world where physical objects are seamlessly integrated into the Internet. These physical objects are connected mainly so that they can become active participants in business processes. Applications of IoT are so massive in scale that some industry analysts predict it will have an impact amounting to a fifth industrial revolution.

Physical connectivity of objects is only one small step toward realizing the full potential of IoT. Extrapolating from the history and success of today's Internet, it was not until the adoption of standardized protocols such as the Hyper-Text Transfer Protocol (HTTP) and the Hyper-Text Markup Language (HTML) in the early 1990s that the massive adoption of the Internet truly occurred. The definition of HTTP and other application-level protocols, in addition to languages for building applications over the ubiquitous connectivity offered by the IP protocol, was the major enabler of the success of the World Wide Web as we use and increasingly depend on it. Today, cloudification of IT business services is a major trend that is only strengthened by further advances in virtualization and in the high-level abstractions that enable the construction of virtualized cloud-based services, such as OpenStack and OpenDaylight.

Another issue that will pose a significant challenge to the deployment of large-scale IoT systems is security. There have already been security breaches in which attacks disrupted the operation of the Internet by manipulating simple, poorly secured IoT nodes.

My research in IoT tackles multiple problems, a sample of which follows:

  • IoT platforms based on IPv6/Thread: IPv6 is key to unifying the heterogeneous protocols inherent in IoT environments. By deploying IPv6 as the unifying network layer protocol, application layer protocols such as MQTT, CoAP, and HTTP can then be used on top of IPv6 at the IoT end nodes. This enables smooth development and deployment of standards-based IoT applications. Issues in IPv6 routing over low-power nodes, optimized neighbour discovery, and the trade-off between connectivity and preserving energy through duty cycling have been on our research agenda. Service discovery protocols in IPv6 IoT deployments have also been investigated.
  • Facilitating large-scale data-driven IoT deployment: The goal of this research is to increase the adoption of IoT services and infrastructure by tackling one of IoT's main roadblocks, namely deployment complexity. Toward this goal, we propose to develop an interoperable and scalable platform for mashups of services provided by IoT devices. More specifically, we propose to achieve three objectives: 1) to develop a protocol for service discovery independent of the underlying application layer, network layer, or physical layer; our proposed work will be based on a recent IETF draft as well as our previous work in this area; 2) to develop a service composition/mashup construction software tool enabling developers to construct "services" using mashups of discovered nodes and their declared services; and 3) to develop a market-ready product prototype for energy-efficient buildings, controlling air-conditioning and lighting systems based on the techniques developed in 1 and 2, plus the additional enhancements needed to take this prototype to successful commercialization.
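The connectivity-versus-energy trade-off from duty cycling, mentioned in the IPv6 bullet above, can be illustrated with a back-of-the-envelope model. All currents, battery capacity, and timings below are assumed for illustration only:

```python
# Hypothetical numbers for a low-power IPv6 node: a lower duty cycle
# stretches battery life but increases the worst-case delay before the
# radio is awake to receive a neighbor's packet.
I_RX_MA, I_SLEEP_MA = 10.0, 0.005   # radio-on vs. sleep current (assumed)
BATTERY_MAH = 1000.0                # assumed battery capacity
WAKE_PERIOD_S = 1.0                 # node wakes once per second

def lifetime_days(duty_cycle):
    """Battery lifetime from the duty-cycle-weighted average current."""
    avg_ma = duty_cycle * I_RX_MA + (1 - duty_cycle) * I_SLEEP_MA
    return BATTERY_MAH / avg_ma / 24.0

def worst_case_latency_s(duty_cycle):
    """The radio is off for the remainder of each wake period."""
    return WAKE_PERIOD_S * (1 - duty_cycle)

for dc in (0.001, 0.01, 0.1):
    print(dc, round(lifetime_days(dc), 1), worst_case_latency_s(dc))
```

Even this crude model shows lifetime varying by orders of magnitude across duty cycles while latency changes only modestly, which is why protocol-level mechanisms (synchronized wake-ups, optimized neighbour discovery) matter so much.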



III) Applications of Machine Learning in Telecommunications

In this area, we apply machine learning models to interesting problems in networking and communications. Our first target is using deep learning to identify malware in mobile and IoT environments; this research is performed with the Canadian company Wedge Networks. With the growth of mobile devices and IoT applications, the amount of malicious software (malware) has been increasing rapidly in recent years, which calls for the development of advanced and effective malware detection approaches. Traditional methods, such as signature-based ones, cannot defend users from the increasing number of new malware types or from rapid changes in malware behavior. In this research project, we propose a new real-time mobile malware detection approach based on deep learning and static analysis. Instead of using Application Programming Interfaces (APIs) only, we further analyze the source code of Android applications and create higher-level graphical semantics, which makes it harder for attackers to evade detection. In particular, we build a call graph from method invocations and aim to render a detection verdict in real time. Deep learning techniques such as recurrent neural networks and graph convolutional networks will be used as the base models for the malware analysis. Acceleration using Graphics Processing Units (GPUs) and/or Field-Programmable Gate Arrays (FPGAs) will be pursued to expedite the learning and decision-making process.
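To make the graph convolutional ingredient concrete, a single GCN layer of the kind applied to call graphs can be sketched in NumPy. The call graph, feature dimensions, and random weights below are toy placeholders, not our trained model:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution: ReLU(D^-1/2 (A + I) D^-1/2 H W), where A is
    the call-graph adjacency matrix and H holds per-method feature vectors."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0],                       # toy call graph: m0 <-> m1 <-> m2
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = rng.standard_normal((3, 4))                # 4 static features per method
W = rng.standard_normal((4, 2))                # project to 2 hidden units
out = gcn_layer(A, H, W)                       # per-method embeddings
```

Each method's output embedding mixes in its callers' and callees' features, so the network can pick up on invocation patterns that single-API signatures miss.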