```shell
# Copy the email address on which you want to receive notifications
# Then create the faasd secret
$ pbpaste | faas-cli secret create notification-email

# Reminder: The secrets are stored on the faasd instance at:
# /var/lib/faasd-provider/secrets
```

### Setting up the RQ worker on the faasd instance

I am going to be lazy here and just install the RQ worker directly on the Ubuntu instance (multipass) that is running faasd etc.
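For what it's worth, inside the function container an OpenFaaS secret shows up as a plain file, mounted under `/var/openfaas/secrets/<name>`. A minimal, hypothetical helper for reading it could look like this (the `base` parameter is only there so it can be exercised locally; it is not part of any OpenFaaS API):

```python
from pathlib import Path


def read_secret(name: str, base: str = "/var/openfaas/secrets") -> str:
    """Read a secret that faasd mounts as a plain file inside the function container."""
    return (Path(base) / name).read_text().strip()


# Inside the function handler you would call something like:
# email = read_secret("notification-email")
```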
Why? Mainly because I want to get this learning project finished, and secondly because I currently lack the knowledge of how to set this up in the best manner. My guess is that something like cloud-init could take care of all of this.
Install the dependencies, including RQ.
```shell
# On the faasd instance (multipass vm)
ubuntu@faasd:~$ pwd
/home/ubuntu

ubuntu@faasd:~$ python3 --version
Python 3.8.5

# Install pip and venv
ubuntu@faasd:~$ sudo apt install python3-pip
ubuntu@faasd:~$ pip3 --version
pip 20.0.2 from /usr/lib/python3/dist-packages/pip (python 3.8)

ubuntu@faasd:~$ sudo apt install python3-venv

# Create a new virtual environment
ubuntu@faasd:~$ mkdir -p deps
ubuntu@faasd:~$ cd deps
ubuntu@faasd:~/deps$ python3 -m venv ./venv
ubuntu@faasd:~/deps$ source venv/bin/activate
(venv) ubuntu@faasd:~/deps$

# Install RQ
(venv) ubuntu@faasd:~/deps$ pip3 install rq
...
Installing collected packages: redis, click, rq
Successfully installed click-7.1.2 redis-3.5.3 rq-1.7.0

# Freeze requirements
(venv) ubuntu@faasd:~/deps$ pip3 freeze > requirements.txt
(venv) ubuntu@faasd:~/deps$ cat requirements.txt
click==7.1.2
redis==3.5.3
rq==1.7.0
```

Start a worker. The function's legodb.yml will be using the queue name `queue-name: everything-is-awesome` and the Redis URL `redis-url: redis://10.62.0.1:9988/0`.
```shell
(venv) ubuntu@faasd:~/deps$ rq worker -u 'redis://10.62.0.1:9988/0' everything-is-awesome
11:20:56 Worker rq:worker:110ee88ebe134e14a707e64de4c34ef1: started, version 1.7.0
11:20:56 Subscribing to channel rq:pubsub:110ee88ebe134e14a707e64de4c34ef1
11:20:56 *** Listening on everything-is-awesome...
11:20:56 Cleaning registries for queue: everything-is-awesome
```

Deploy the latest version of the function and test.
```shell
$ ./up.sh
$ curl -X PUT http://192.168.64.4:9999/function/legodb/legosets-download-images
# This worked

# However, checking the output from the RQ worker
...
ModuleNotFoundError: No module named 'function'
```

Something I kept wondering about while playing around with RQ was how it knew how to run the actual Python code. My assumption was that there was some clever thing going on: when you enqueued a task, the code somehow got reflected, serialized and stored in Redis, and then the worker would pick up a task, deserialize the Python and invoke it. But how would dependencies etc. work?
Lol, I should have known it was not going to be that complex or clever. I am actually glad it turns out to be the easiest scenario to explain.
```shell
# On my local Mac, where the RQ worker previously ran with no issues
$ cd ~/some-other-dir-where-the-lego-code-does-not-exist

# Spin up the worker
$ rq worker -u 'redis://192.168.64.4:9988/0' johnny5

# Run the testing code that adds a task ...
ModuleNotFoundError: No module named 'legodb'
```

Ok, that confirms it: the RQ worker needs access to the same source code in order to run the tasks.
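This matches how RQ works under the hood: what lands in Redis is essentially a pickled *reference* to the function, not its source code. Plain stdlib `pickle` shows why the worker must be able to import the same module:

```python
import json
import pickle

# Pickling a module-level function stores only its module and name...
payload = pickle.dumps(json.dumps)
assert b"json" in payload and b"dumps" in payload

# ...and unpickling re-imports "json" and looks the function up again.
# On a worker that cannot import that module, this lookup step is
# exactly what raises ModuleNotFoundError.
fn = pickle.loads(payload)
assert fn is json.dumps
```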
Right, so now I have the issue that I would have to deploy the same code my function uses to wherever the RQ worker(s) live, and keep it all in sync.
This is way too much of a **** ache for what I want to accomplish in this learning project. So instead I am going to cop out and just run the worker on my local development setup.
However, this still achieves my overall goal: to get out of my comfort zone and learn new things.
Although this feels like a big fail, it is a win at the same time, because I have made some key discoveries here.
- faasd uses containerd (not actually Docker)
- docker-compose.yaml spins up the containers used by OpenFaaS sub-components
- Your functions access services using the IP 10.62.0.1 and the "external" port
- RQ workers need access to the Python code that they will run

### Troubleshooting the Redis connection

TL;DR: The port turned out to be 9988 for the function to connect to Redis as well.
**Conclusion:** The "external" port specified in docker-compose.yaml is also the port that your functions will need to use. For example, my Redis setup stated `"9988:6379"`. The external port is 9988, which is exposed not only to the local network but also to the other containers (where the functions are running).

**Of course!** All that docker-compose.yaml is doing is spinning up a separate container for Redis, and that is not the same container in which the function runs.

Below is the journey I took in trying to figure this out.
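To make that mapping concrete, here is a stdlib-only sketch of how a compose `ports` entry translates into the URL a function should use (the bridge IP `10.62.0.1` comes from the faasd docs; the variable names are mine):

```python
# docker-compose.yaml publishes ports as "host_port:container_port"
mapping = "9988:6379"
host_port, _container_port = mapping.split(":")

# Functions run in their own containers, so they reach core services
# via the faasd bridge IP and the *published* port, not 6379
FAASD_BRIDGE_IP = "10.62.0.1"
redis_url = f"redis://{FAASD_BRIDGE_IP}:{host_port}/0"
print(redis_url)  # redis://10.62.0.1:9988/0
```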
```shell
$ curl -X PUT http://192.168.64.4:9999/function/legodb/legosets-download-images
...
<title>500 Internal Server Error</title>

$ faas-cli logs legodb
...
2021-03-29T09:10:41Z 2021/03/29 09:10:41 stderr: raise ConnectionError(self._error_message(e))
2021-03-29T09:10:41Z 2021/03/29 09:10:41 stderr: redis.exceptions.ConnectionError: Error 111 connecting to 10.62.0.1:6379. Connection refused.
```

Looks like the function can't make a connection to the Redis service at `10.62.0.1:6379`.
Time to hit the books and google again.
Are the IP addresses correct?
```shell
# On the faasd instance (multipass vm)
ubuntu@faasd:~$ ifconfig
...
enp0s2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.64.4 netmask 255.255.255.0 broadcast 192.168.64.255
...
openfaas0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 10.62.0.1 netmask 255.255.0.0 broadcast 10.62.255.255
```

Ok, the IP address on my local network is correct, as is the `openfaas0` bridge. According to the faasd documentation and ebook, the IP `10.62.0.1` is the one exposed to the functions, and all the core services can be reached at that IP address.

Let me check the Redis setup.
```shell
ubuntu@faasd:~$ sudo nano /var/lib/faasd/docker-compose.yaml
...
  redis:
    ports:
      - "9988:6379"
```

Ok, I know that Redis can be reached from my Mac on port 9988. Perhaps 9988 should also be used by the function. My assumption here was that 6379 is the internal port on 10.62.0.1.
Instead of just changing the environment variable and redeploying the function, I first want to see if I can explore this a bit more.
```shell
# Local Mac
# It is handy to deploy the nodeinfo function on faasd and keep it around
$ echo verbose | faas-cli invoke nodeinfo
...
eth1: [
  {
    address: '10.62.0.28',
    netmask: '255.255.0.0',
    family: 'IPv4',
    mac: '52:9c:57:20:b7:08',
    internal: false,
    cidr: '10.62.0.28/16'
  },

# Ok, looks like 10.62.0.28 is the IP address used by the container running the nodeinfo function.

# On the faasd instance
ubuntu@faasd:~$ netstat -tuna
# Install netstat using: sudo apt install net-tools
...
tcp        0      0 10.62.0.1:47556     10.62.0.26:6379     ESTABLISHED
tcp        0      0 10.62.0.1:47554     10.62.0.26:6379     ESTABLISHED
tcp6       0      0 :::9988             :::*                LISTEN
tcp6       0      0 192.168.64.4:9988   192.168.64.1:57559  ESTABLISHED
```

Ok, so port 6379 is being used by 10.62.0.26, and I am going to guess that is the Redis container.
Checking the logs.
```shell
# Check the logs of a core service (or anything you added to docker-compose.yml)
ubuntu@faasd:~$ journalctl -t openfaas:redis

# Let's see what happens on the gateway when we invoke our function
ubuntu@faasd:~$ journalctl -f -t openfaas:gateway
Mar 29 10:40:37 faasd openfaas:gateway[16070]: 2021/03/29 09:40:37 GetReplicas [legodb.openfaas-fn] took: 0.032137s
Mar 29 10:40:37 faasd openfaas:gateway[16070]: 2021/03/29 09:40:37 GetReplicas [legodb.openfaas-fn] took: 0.032228s
Mar 29 10:40:37 faasd openfaas:gateway[16070]: 2021/03/29 09:40:37 Forwarded [PUT] to /function/legodb/legosets-download-images - [500] - 0.402113s seconds
```

Oh, what the heck, let's just change the port in our environment variable and redeploy.
Voila! That worked. If you made it this far, read the TL;DR section above, where I make the obvious connection 🤦‍♂️
*Photo by the blowup on Unsplash. From "100 Days of Learning: Day 20 & 21 – You win some, you lose some" by André Jacobs.*