Integrating Edge Xpert and OpenFaaS

Serverless computing, also known as Function as a Service (FaaS), allows developers to build applications from stateless functions without worrying about the underlying infrastructure. A serverless platform handles the lifecycle, execution, and resource scaling of functions that run only when invoked or triggered by an event. Serverless computing eliminates the need for always-on resources by creating a new container instance to host a function when it is called and destroying the container when the function completes. This on-demand execution model complements edge computing environments, where hardware platforms have limited resources. Incorporating serverless computing at the edge of an IoT network to run single-purpose functions can reduce overall resource consumption. Additionally, because most serverless platforms support a variety of modern programming languages, integrating them with edge computing gives developers the flexibility to choose their preferred language.

Several commercial public cloud service providers, including Amazon AWS, Microsoft Azure, and Google Cloud, offer their own serverless computing services. However, public cloud platforms have certain limitations, such as vendor lock-in and restrictions on how functions can execute. Open source serverless frameworks can instead run on private infrastructure, avoiding any form of vendor lock-in.

This document demonstrates the steps required to integrate OpenFaaS, a popular open source serverless framework, with Edge Xpert.

For further information on OpenFaaS, refer to the following:

Prerequisites

For this tutorial, you need the following:

Installing the Kubernetes Command-line Tool

The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. This document uses kubectl to do the following:

  • Deploy applications
  • Inspect and manage cluster resources
  • View logs

Run the following commands in the order shown to install the latest version of kubectl in your Ubuntu VM:

export VER=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)
curl -LO https://storage.googleapis.com/kubernetes-release/release/$VER/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

For further information on installing and setting up the Kubernetes command-line tool, see the Kubernetes website.
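The download URL used above is specific to linux/amd64. If you later need kubectl for another architecture, the same URL pattern can be parameterized. The helper below is a hypothetical sketch (the function name is our own; arm64 is shown only as an example):

```shell
# Hypothetical helper: build the kubectl download URL for a given
# version and CPU architecture (same URL pattern as the install step above)
kubectl_url() {
  ver="$1"
  arch="$2"
  echo "https://storage.googleapis.com/kubernetes-release/release/${ver}/bin/linux/${arch}/kubectl"
}

# Example: URL for kubectl v1.18.0 on arm64
kubectl_url v1.18.0 arm64
```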

Installing k3d

k3d is a utility that runs k3s as a Docker container.

k3s is the lightweight Kubernetes distribution by Rancher: rancher/k3s.

This document uses k3d to create a containerized k3s cluster.

Use the following command to run the k3d install script and install the latest k3d in your Ubuntu VM:

curl -s https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash
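Before creating a cluster, it can be worth confirming that the tools installed so far are actually on your PATH. The `need` helper below is a hypothetical sketch, not part of k3d or this product:

```shell
# Hypothetical helper: check that each named command is on PATH
need() {
  for cmd in "$@"; do
    command -v "$cmd" > /dev/null 2>&1 || { echo "missing: $cmd" >&2; return 1; }
  done
}

# After the install steps above, these should all be present:
# need docker kubectl k3d && echo "all tools installed"
need sh && echo "sh found"
```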

Create and Set Up a Kubernetes Cluster

  1. Create a local Kubernetes cluster, k3s-default, to run the serverless platform, as follows:

    k3d create
    

    The output is similar to the following:

    INFO[0000] Created cluster network with ID 74fab8ded624a702fddafbc453ca71c2c655e19340b2df2cf73b21319c6a967c
    INFO[0000] Created docker volume  k3d-k3s-default-images
    INFO[0000] Creating cluster [k3s-default]
    INFO[0000] Creating server using docker.io/rancher/k3s:v0.10.0...
    INFO[0000] SUCCESS: created cluster [k3s-default]
    INFO[0000] You can now use the cluster with:
    
    export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
    kubectl cluster-info
    
  2. Configure kubectl to switch to the k3s-default context, as follows:

    export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
    
  3. Check the local Kubernetes cluster information using the kubectl utility, as follows:

    kubectl cluster-info
    

    The output is similar to the following:

    Kubernetes master is running at https://127.0.0.1:6443
    CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    

Install and Set Up OpenFaaS

To install and set up OpenFaaS, complete the following steps:

  1. Install k3sup

    k3sup is a lightweight utility that makes it quick and easy to create and use a k3s Kubernetes cluster on any local or remote VM. This document uses k3sup to install OpenFaaS.

    Run the following command to download and run the latest k3sup install script in your Ubuntu VM:

    curl -SLsf https://get.k3sup.dev/ | sudo sh
    
  2. Install OpenFaaS using the following k3sup command:

    k3sup app install openfaas
    

    The output is similar to the following:

    ...
    
    =======================================================================
    = OpenFaaS has been installed.                                        =
    =======================================================================
    
    # Get the faas-cli
    curl -SLsf https://cli.openfaas.com | sudo sh
    
    # Forward the gateway to your machine
    kubectl rollout status -n openfaas deploy/gateway
    kubectl port-forward -n openfaas svc/gateway 8080:8080 &
    
    # If basic auth is enabled, you can now log into your gateway:
    PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
    echo -n $PASSWORD | faas-cli login --username admin --password-stdin
    
    faas-cli store deploy figlet
    faas-cli list
    
    # For Raspberry Pi
    faas-cli store list \
     --platform armhf
    
    faas-cli store deploy figlet \
     --platform armhf
    
    # Find out more at:
    # https://github.com/openfaas/faas
    
    Thanks for using k3sup!
    
OpenFaaS is installed.

  1. Enter the following command to check whether the OpenFaaS gateway (the HTTP endpoint used to invoke serverless functions) has been deployed successfully:

    kubectl rollout status -n openfaas deploy/gateway
    

    If successful, the output is similar to the following:

    deployment "gateway" successfully rolled out
    
  2. Open a tunnel from the Kubernetes cluster to your local computer so that we can access the OpenFaaS gateway. There are several ways to access OpenFaaS, but we’ll use the following port-forwarding command:

    kubectl port-forward --address 0.0.0.0 svc/gateway -n openfaas 8081:8080 &
    
  3. Set the OPENFAAS_URL environment variable to reference the IP address and port defined in the port-forward command, as follows:

    export OPENFAAS_URL="10.0.2.15:8081"
    

Note

You must replace the IP address in this command with the IP address of your Ubuntu VM. Our Ubuntu VM uses 10.0.2.15, as shown in the above example.
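If you script these steps, the VM's IP address does not need to be hard-coded. The sketch below derives it at run time; it assumes a Linux VM where `hostname -I` is available, and the 127.0.0.1 fallback and port 8081 are our own choices matching the port-forward command above:

```shell
# Sketch: build OPENFAAS_URL from the VM's primary IP address
OPENFAAS_PORT=8081
OPENFAAS_IP=$(hostname -I 2>/dev/null | awk '{print $1}')
# Fall back to loopback if no IP could be determined
export OPENFAAS_URL="${OPENFAAS_IP:-127.0.0.1}:${OPENFAAS_PORT}"
echo "$OPENFAAS_URL"
```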

  4. Install the OpenFaaS command-line tool, faas-cli, to manage the OpenFaaS functions, using the following command:

    curl -sLSf https://cli.openfaas.com | sudo sh
    
  5. Log in to the OpenFaaS gateway before deploying any serverless functions through faas-cli, using the following commands:

    PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
    
    echo -n $PASSWORD | faas-cli login --username admin --password-stdin
    

    The output is similar to the following:

    Calling the OpenFaaS server to validate the credentials...
    WARNING! Communication is not secure, please consider using HTTPS. Letsencrypt.org offers free SSL/TLS certificates.
    credentials saved for admin http://10.0.2.15:8081
    

    If you receive a connection refused error when running this command, check that the OPENFAAS_URL environment variable has been set correctly to the IP address and port defined in the port-forward command, and then run the commands again.
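That check can also be automated before calling faas-cli login. The helper below is a hypothetical sketch that only validates the variable's shape (host:port), not that the gateway is actually reachable:

```shell
# Hypothetical sanity check: OPENFAAS_URL should look like host:port
check_openfaas_url() {
  case "$1" in
    *:[0-9]*) return 0 ;;
    *) echo "OPENFAAS_URL missing or malformed: '$1'" >&2; return 1 ;;
  esac
}

check_openfaas_url "10.0.2.15:8081" && echo "url looks ok"
# In a script: check_openfaas_url "$OPENFAAS_URL" || exit 1
```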

Build and Deploy the First Serverless Function in Python

To build and deploy the first serverless function in Python, complete the following steps:

  1. Check that the list of OpenFaaS functions is empty, using the following command:

    faas-cli list
    

    The output is similar to the following:

    Function                       Invocations     Replicas
    
  2. We can now write our first OpenFaaS function in Python. Start by creating a new folder in the Ubuntu VM using the following command:

    mkdir -p ~/functions && cd ~/functions
    
  3. Scaffold a new Python function using faas-cli, as follows:

    faas-cli new --lang python hello-python
    

    This command creates the following files:

    hello-python/handler.py
    hello-python/requirements.txt
    hello-python.yml
    
  4. Edit handler.py and add the following lines:

    def handle(req):
        print("Hello! You said: " + req)
    
  5. Build the serverless function using the following command:

    faas-cli build -f ./hello-python.yml
    

    The output is similar to the following:

    [0] > Building hello-python.
    ....
    Successfully built 9968b63b7ded
    Successfully tagged judehung/hello-python:latest
    Image: judehung/hello-python:latest built.
    [0] < Building hello-python done.
    [0] worker done.
    

    This command builds your function as a local Docker image, which you can see using the following docker command:

    docker images | grep hello-python
    

    The output is similar to the following:

    hello-python        latest       e0344b26305f     one minute ago
    
  6. Upload the function to the remote registry using the following command:

    faas-cli push -f ./hello-python.yml
    

    The output is similar to the following:

    [0] > Pushing hello-python [judehung/hello-python:latest].
    ...
    [0] < Pushing hello-python [judehung/hello-python:latest] done.
    [0] worker done.
    
  7. Deploy the function using the following command:

    faas-cli deploy -f ./hello-python.yml
    

    The output is similar to the following:

    Deploying: hello-python.
    WARNING! Communication is not secure, please consider using HTTPS. Letsencrypt.org offers free SSL/TLS certificates.
    
    Deployed. 202 Accepted.
    URL: http://10.0.2.15:8081/function/hello-python
    
  8. Test your function using one of the following commands:

    curl 10.0.2.15:8081/function/hello-python -d "Edge Xpert rocks!"
    

    Or:

    echo "Edge Xpert rocks!" | faas-cli invoke hello-python
    

    The output from both commands is similar to the following:

    Hello! You said: Edge Xpert rocks!
    
  9. Register the hello-python function as an Edge Xpert export client, so that any new event received by core-data triggers the hello-python invocation, using the following command:

    curl http://localhost:48071/api/v1/registration -d \
    '{"name":"MyServerlessHelloREST","addressable":{"name":"Hello","protocol":"HTTP","address":"10.0.2.15","port":8081,"path":"/function/hello-python","method":"POST"},"format":"JSON","enable":true,"destination":"REST_ENDPOINT"}'
    
  10. Add a pair of value descriptors using the following commands:

    curl http://localhost:48080/api/v1/valuedescriptor -d \
    '{"name":"temperature","min":"-40","max":"140","type":"F","uomLabel":"degree cel","defaultValue":"0","formatting":"%s","labels":["temp","hvac"]}'
    
    curl http://localhost:48080/api/v1/valuedescriptor -d \
    '{"name":"humidity","min":"0","max":"100","type":"F","uomLabel":"per","defaultValue":"0","formatting":"%s","labels":["humidity","hvac"]}'
    
  11. Send a test event to trigger the hello-python invocation using the following command:

    curl http://localhost:48080/api/v1/event -d \
    '{"origin":1471806386919,"device":"livingroomthermostat","readings":[{"origin":1471806386919,"name":"temperature","value":"72"}, {"origin":1471806386919,"name":"humidity","value":"58"}]}'
    
  12. Check the list of OpenFaaS functions again, using the following command:

    faas-cli list
    

The output is similar to the following:

      Function                              Invocations     Replicas
      hello-python                          2               1

The result lists your hello-python function and shows that it has been invoked twice.
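The inline JSON in the registration and event commands above is hard to read and easy to mistype. One option is to keep each payload in a file and validate it before posting. The sketch below rebuilds the export-client registration from step 9 this way; the /tmp path is arbitrary, and python3 is assumed to be available for validation:

```shell
# Sketch: keep the export-client registration payload in a file,
# validate it, then post it with curl -d @file
cat > /tmp/registration.json <<'EOF'
{
  "name": "MyServerlessHelloREST",
  "addressable": {
    "name": "Hello",
    "protocol": "HTTP",
    "address": "10.0.2.15",
    "port": 8081,
    "path": "/function/hello-python",
    "method": "POST"
  },
  "format": "JSON",
  "enable": true,
  "destination": "REST_ENDPOINT"
}
EOF

# Fail fast on malformed JSON before touching the Edge Xpert API
python3 -m json.tool < /tmp/registration.json > /dev/null && echo "payload ok"

# Then register it:
# curl http://localhost:48071/api/v1/registration -d @/tmp/registration.json
```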