AI Chipset - Satellite signal processing using an off-the-shelf AI chipset

  • Status
    Ongoing
  • Activity Code
    3A.122
Objectives

The project aims to demonstrate the feasibility of applying AI algorithms in different scenarios for wireless communication applications, including:

  • Large dataset generation;

  • AI model design;

  • Implementation in a suitable hardware configuration.

The following objectives have been realized during the project:

  • Identification of the best candidate satellite scenario for an on-board off-the-shelf AI accelerator;

  • Identification of the best machine learning architecture compatible with off-the-shelf AI accelerators, and collection/generation of the required input data sets;

  • Development of the machine learning setup, including validation, and execution of the machine learning while fed by the input data sets;

  • Establishment of a complete and self-consistent set of technical requirements for an AI Satellite Telecommunication Testbed (AISTT) that will carry the off-the-shelf AI accelerator and will validate the performance of the AI-based technique in the laboratory;

  • Identification of technology gaps for TRL 9 and establishment of a roadmap for an in-orbit validation to raise the TRL closer to market readiness.

Challenges

The following challenges apply:

  • Selection of the correct Machine Learning (ML) technique and scope. In this project, a model trained on data from an SDR-hardware-based setup was reported to perform well on that data (based on AI metrics such as F1 score, Recall, Precision and Accuracy), but turned out not to perform well on a MATLAB-generated dataset. The challenge resides in having a wide range of datasets, with different levels of complexity and generated on different platforms, on which to train the AI model (see the sketch after this list).

  • Selection of the correct hardware to run the AI algorithms. It is key to have a component with sufficient capacity to run the entire AI algorithm on-chip.
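
The generalisation gap described above can be made visible with a simple cross-dataset evaluation. The following is a minimal sketch, assuming scikit-learn and random placeholder arrays in place of the real SDR-captured and MATLAB-generated datasets; none of the names below are project artefacts:

```python
# Hypothetical cross-dataset check: a model that scores well on held-out
# SDR-captured data can still degrade on a MATLAB-generated dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def report(name, y_true, y_pred):
    # The four metrics the project used to judge model performance.
    print(f"{name}: acc={accuracy_score(y_true, y_pred):.3f} "
          f"prec={precision_score(y_true, y_pred, average='macro'):.3f} "
          f"rec={recall_score(y_true, y_pred, average='macro'):.3f} "
          f"f1={f1_score(y_true, y_pred, average='macro'):.3f}")

# Placeholder arrays standing in for the SDR-captured and MATLAB-generated
# datasets (features X, interference-class labels y).
rng = np.random.default_rng(0)
X_sdr, y_sdr = rng.normal(size=(1000, 16)), rng.integers(0, 4, 1000)
X_mat, y_mat = rng.normal(0.5, 1.2, size=(1000, 16)), rng.integers(0, 4, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X_sdr, y_sdr, test_size=0.3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

report("SDR hold-out  ", y_te, model.predict(X_te))    # in-distribution score
report("MATLAB dataset", y_mat, model.predict(X_mat))  # distribution-shift score
```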

Benefits

Artificial Intelligence has proven its value in several fields. In the case of satellite telecommunications, it is a promising tool for many applications, for instance spectrum sharing or cognitive radio. AI algorithms, however, can be very computationally intensive, and thus power-hungry and slow when run on standard processors. These issues seriously limit their use in a variety of devices and therefore applications. To solve this, industry has developed specific AI processors capable of running complex algorithms in real time while consuming a fraction of the power needed by traditional processors. These AI processors are now available on the market, even as off-the-shelf AI chipsets, allowing inferencing to be performed in an effective manner.
The availability of such chipsets has enabled the practical use of AI for satellite applications in both on-ground and on-board scenarios. AI chips have already been investigated by ESA for Earth Observation satellites; in that particular case, the chosen AI chip allows the fast execution of machine learning models for image recognition on board the satellites. A similar chip could evidently also be used for telecommunication satellites, for instance for signal processing.

Features

The AI algorithm selection, justified by a performance assessment based on a custom-generated dataset for training and validation, is performed on a cloud platform.

An integrated Artificial Intelligence Satellite Telecommunications Testbed (AISTT) is built to deploy the trained model on the AI processor. The AISTT can:

  • Generate the RF signal with full control over active interference and noise conditions

  • Sample the RF spectrum and

    • Detect and classify interference (scenario 1)

    • Determine the optimal DVB-S2 MODCOD for the wireless environment (scenario 2)

  • Perform additional training using semi-supervised learning (a minimal sketch follows this list)
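
The semi-supervised retraining step in the last bullet can be realised with self-training (pseudo-labelling). Below is a minimal sketch under assumed names: `X_lab`/`y_lab` are a small labelled set and `X_unlab` are unlabelled spectrum captures; the classifier choice is illustrative, not the project's actual model:

```python
# Minimal self-training (pseudo-labelling) loop: fold the model's most
# confident predictions on unlabelled captures back into the training set.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, confidence=0.95, rounds=3):
    model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    for _ in range(rounds):
        proba = model.predict_proba(X_unlab)
        keep = proba.max(axis=1) >= confidence  # only confident pseudo-labels
        if not keep.any():
            break
        # Map confident predictions back to class labels and grow the pool.
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, model.classes_[proba[keep].argmax(axis=1)]])
        X_unlab = X_unlab[~keep]
        model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
    return model
```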

System Architecture

The AISTT is made of an AISTT Transmitter, comprising a txProcessor and a txFrontend, and an AISTT Receiver, comprising an rxFrontend, an rxProcessor and an MLEngine.

  • The txProcessor is the data generator process running on an external PC, which computes the combined signal stream of the DVB-S2 signal, interference and AWGN channel (a simplified sketch of this composition follows this list)

  • The txFrontend is the data generator front-end, implemented on an Ettus USRP B205mini-i (SDR), which performs digital-to-analogue conversion and up-conversion of the signal stream.

  • The rxFrontend is the ML receiver front-end, implemented on an Ettus USRP B205mini-i (SDR), which performs RF signal down-conversion, sampling and digitising.

  • The rxProcessor is the ML processor; together with the MLEngine, it is hosted on the Unibap iX10-100 with a ROCm GPU and a Movidius VPU. The Unibap platform is a Linux development platform for running AI algorithms, with Python support and AI accelerators (VPU, GPU, ...)
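
As a rough illustration of what the txProcessor computes, the NumPy sketch below combines a QPSK symbol stream (standing in for the DVB-S2 baseband), a continuous-wave interferer and AWGN at a chosen SNR. The sample rate, interferer model and power levels are assumptions for illustration only, not the project's actual waveform chain:

```python
# Simplified stand-in for the txProcessor: combine a DVB-S2-like baseband
# stream with an interferer and AWGN before the SDR up-converts it.
import numpy as np

rng = np.random.default_rng(42)
n, fs = 4096, 1e6          # number of samples and sample rate (assumed)
t = np.arange(n) / fs

# QPSK symbols as a stand-in for the DVB-S2 signal (unit average power).
bits = rng.integers(0, 2, size=(n, 2))
signal = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Continuous-wave interferer at a 100 kHz offset, 6 dB below the signal.
interferer = 10 ** (-6 / 20) * np.exp(2j * np.pi * 100e3 * t)

# Complex AWGN scaled for a 10 dB SNR relative to the unit-power signal.
snr_db = 10
noise_std = np.sqrt(10 ** (-snr_db / 10) / 2)
noise = noise_std * (rng.normal(size=n) + 1j * rng.normal(size=n))

tx_stream = signal + interferer + noise   # handed to the txFrontend (SDR)
```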

Plan

The activities were organized in four tasks:

  • Task 1: Identification and definition of the scenarios for which the AI algorithm will be applied;

  • Task 2: Generation of datasets that match the defined scenarios, on which the AI algorithm can be trained;

  • Task 3: Design, training and validation of the AI algorithms for both scenarios;

  • Task 4: Design and deployment of the AISTT with LEO-compatible hardware, running the general software and the AI algorithms, as well as an external dataset generator.

The following milestones applied: SRR, DSR (Data Sets Review), MLR (Machine Learning Review), HWDR (Hardware Design Review), TRR, TRB, FR and FP. 

Current status

The main results of the AI Chipset project concern:

  • A demonstration of AI algorithms solving complex wireless communication problems with greater accuracy than classical approaches

  • A selection of efficient hardware suitable for deployment in LEO applications

  • A thorough understanding of the development of datasets for AI algorithm development

  • A better understanding of how a generalised AI model can be obtained, ready for deployment

The TRB/FR is planned for the end of 2022.
