This folder contains notebook examples for running Llama model inference on Azure's serverless API offerings. We will cover: