Deploying AI models

Once an AI model has been trained and tested, it can be deployed in a variety of ways. One option is to deploy the model on board a device, such as a UAV. This allows image data to be processed in real time, but it can be challenging to fit the model and its associated computing resources onto the device. Another option is to collect data in the field and then process it on a more powerful computer system located elsewhere. This approach can be more scalable, but it introduces a delay between data collection and processing.
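As a hedged illustration of the on-board option, the sketch below converts a trained PyTorch model to TorchScript and applies dynamic quantization so that it can run on a resource-constrained companion computer; the MobileNet backbone, input resolution, and file names are placeholders standing in for the project's own trained network.

```python
# Minimal sketch: packaging a trained model for on-board (edge) deployment.
# The MobileNet backbone, input size, and output file names are placeholders.
import torch
import torchvision

model = torchvision.models.mobilenet_v3_small(weights="DEFAULT")
model.eval()

# TorchScript export lets the model run without a full Python environment,
# e.g. on an embedded companion computer carried by a UAV.
example_input = torch.randn(1, 3, 224, 224)  # one RGB image at the expected resolution
scripted = torch.jit.trace(model, example_input)
scripted.save("detector_edge.pt")

# Dynamic quantization shrinks the model and speeds up inference on CPU-only hardware.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
torch.jit.trace(quantized, example_input).save("detector_edge_int8.pt")
```

In practice the exported and quantized models would be validated against the original before deployment, since reduced precision can alter predictions slightly.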

The most common way to deploy an AI model is through a cloud-based interface. This approach offers several advantages, including access to high-performance computing resources for models that are increasingly computationally demanding, and a convenient, easy-to-use interface. However, it can be challenging to deploy and maintain cloud-based systems in field settings, where reliable internet connections may not be available, and data sharing and privacy are important considerations when using cloud services. Finally, because these frameworks are provided on a fee-for-service basis, heavy use of a model can incur substantial deployment costs that would not arise if the model were run on dedicated local computer hardware.

Common cloud-based deployment frameworks include Amazon SageMaker, Google Cloud AI Platform, and Microsoft Azure Machine Learning.
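As a hedged example of this workflow, the sketch below sends a single field image to a model already hosted as an Amazon SageMaker endpoint; the endpoint name, AWS region, and image path are hypothetical placeholders, and the other platforms listed above offer equivalent client libraries.

```python
# Minimal sketch: querying a model hosted on a cloud inference endpoint
# (Amazon SageMaker shown here). The endpoint name, region, and image
# path are hypothetical placeholders.
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="eu-west-1")

with open("field_image_0001.jpg", "rb") as f:  # image collected in the field
    payload = f.read()

response = runtime.invoke_endpoint(
    EndpointName="wildlife-detector-endpoint",  # hypothetical deployed model
    ContentType="application/x-image",
    Body=payload,
)

# The endpoint returns the model's predictions, typically as JSON.
predictions = response["Body"].read().decode("utf-8")
print(predictions)
```

Because each such request is billed by the provider, sustained high-volume use of a hosted endpoint is where the deployment costs noted above accumulate.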