{"id":3699,"date":"2023-08-23T12:04:02","date_gmt":"2023-08-23T17:04:02","guid":{"rendered":"https:\/\/fgiasson.com\/blog\/?p=3699"},"modified":"2023-08-23T12:04:07","modified_gmt":"2023-08-23T17:04:07","slug":"how-to-deploy-hugging-face-models-in-a-docker-container","status":"publish","type":"post","link":"https:\/\/fgiasson.com\/blog\/index.php\/2023\/08\/23\/how-to-deploy-hugging-face-models-in-a-docker-container\/","title":{"rendered":"How to Deploy Hugging Face Models in a Docker Container"},"content":{"rendered":"\n<p>In this short tutorial, we will explore how <a href=\"https:\/\/huggingface.co\/\">Hugging Face<\/a> models can be deployed in a <a href=\"https:\/\/docker.com\">Docker<\/a> Container and exposed as a web service endpoint.<\/p>\n<p>The service it exposes is a translation service from English to French and French to English.<\/p>\n<p>Why someone would like to do that? Other than to learn about those specific technologies, it is a very convenient way to try and test the thousands of models that exists on Hugging Face, in a clean and isolated environment that can easily be replicated, shared or deployed elsewhere than on your local computer.<\/p>\n<p>In this tutorial, you will learn how to use <code>Docker<\/code> to create a container with all the necessary code and artifacts to load Hugging Face models and to expose them as web service endpoints using <code>Flask<\/code>.<\/p>\n<p><a href=\"https:\/\/github.com\/fgiasson\/en-fr-translation-service\">All code and configurations used to write this blog post are available in this GitHub Repository<\/a>. You simply have to clone it and to run the commands listed in this tutorial to replicate the service on your local machine.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Installing Docker<\/h2>\n\n\n\n<p>The first step is to install Docker. 
The easiest way is by simply installing <a href=\"https:\/\/www.docker.com\/products\/docker-desktop\/\">Docker Desktop<\/a>, which is available on macOS, Windows and Linux.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Creating the Dockerfile<\/h2>\n\n\n\n<p>The next step is to create a new <code>Git<\/code> repository where you will create a <code>Dockerfile<\/code>. The Dockerfile is where you write all the instructions that tell Docker how to build the container.<\/p>\n<p>I would also strongly encourage you to install and use <a href=\"https:\/\/github.com\/hadolint\/hadolint\">hadolint<\/a>, which is a really good Docker linter that helps you follow Docker best practices. There is also a <a href=\"https:\/\/marketplace.visualstudio.com\/items?itemName=exiasr.hadolint\">plugin for VS Code<\/a> if that is what you use as your development IDE.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Base image and key installs<\/h3>\n\n\n\n<p>The first thing you define in a Dockerfile is the base image to use to initialize the container. For this tutorial, we will use Ubuntu&#8217;s latest LTS:<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-plain\"><code># Use Ubuntu&#39;s current LTS\nFROM ubuntu:jammy-20230804<\/code><\/pre><\/div>\n\n\n\n<p>Since we are creating a Python web service that exposes the predictions of a ML model, the next step is to add the key pieces required for the Python service. 
Let&#8217;s make sure to include only what is necessary, to minimize the size and complexity of the container as much as possible:<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-plain\"><code># Make sure to not install recommends and to clean the \n# install to minimize the size of the container as much as possible.\nRUN apt-get update && \\\n    apt-get install --no-install-recommends -y python3=3.10.6-1~22.04 && \\\n    apt-get install --no-install-recommends -y python3-pip=22.0.2+dfsg-1ubuntu0.3 && \\\n    apt-get install --no-install-recommends -y python3-venv=3.10.6-1~22.04 && \\\n    apt-get clean && \\\n    rm -rf \/var\/lib\/apt\/lists\/*<\/code><\/pre><\/div>\n\n\n\n<p>This instructs Docker to install <code>Python3<\/code>, <code>pip<\/code> and <code>venv<\/code>. It also cleans apt&#8217;s cache of downloaded files, skips recommended packages, and pins the exact version of each package we install. That keeps the container small, while making sure it can easily be reproduced, with the exact same codebase, any time in the future.<\/p>\n<p>Another thing to note: we run multiple commands with a single <code>RUN<\/code> instruction by chaining them together with <code>&amp;&amp;<\/code>. This is to minimize the number of layers created by Docker for the container, and this is a best practice to follow when creating containers. 
If you don&#8217;t do this and run <code>hadolint<\/code>, then you will get warnings suggesting that you refactor your <code>Dockerfile<\/code> accordingly.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Copy required files<\/h3>\n\n\n\n<p>Now that the base operating system is installed, the next step is to install all the requirements of the Python project we want to deploy in the container:<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-plain\"><code># Set the working directory within the container\nWORKDIR \/app\n\n# Copy necessary files to the container\nCOPY requirements.txt .\nCOPY main.py .\nCOPY download_models.py .<\/code><\/pre><\/div>\n\n\n\n<p>First we define the working directory with the <code>WORKDIR<\/code> instruction. From now on, every other instruction will run from that directory in the container. Then we copy the local files <code>requirements.txt<\/code>, <code>main.py<\/code> and <code>download_models.py<\/code> to that working directory.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Create virtual environment<\/h3>\n\n\n\n<p>Before doing anything with those files, we are better off creating a virtual environment in which to install all those dependencies. Some people may wonder why we create an environment within an environment. It adds a further layer of isolation between the container and the Python application, making sure that there is no possibility of dependency clashes. 
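As a quick sanity check, you can verify from inside the container (for example through <code>docker exec<\/code>) that the virtual environment&#8217;s interpreter is the one actually in use. This small sketch is not part of the repository; it relies only on the Python standard library:

```python
import sys

def in_virtualenv() -> bool:
    """Return True when the running interpreter comes from a venv.

    Inside a virtual environment, sys.prefix points at the environment
    directory (/app/.venv in this container), while sys.base_prefix
    still points at the base Python installation.
    """
    return sys.prefix != sys.base_prefix

print(sys.prefix, in_virtualenv())
```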
This is a good practice to adopt.<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-plain\"><code># Create a virtual environment in the container\nRUN python3 -m venv .venv\n\n# Activate the virtual environment\nENV PATH=&quot;\/app\/.venv\/bin:$PATH&quot;<\/code><\/pre><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">Install application requirements<\/h3>\n\n\n\n<p>Once the virtual environment is created and activated in the container, the next step is to install all the required dependencies in that new environment:<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-plain\"><code># Install Python dependencies from the requirements file\nRUN pip install --no-cache-dir -r requirements.txt && \\\n    # Get the models from Hugging Face to bake into the container\n    python3 download_models.py<\/code><\/pre><\/div>\n\n\n\n<p>It runs <code>pip install<\/code> to install all the dependencies listed in <code>requirements.txt<\/code>. The dependencies are:<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-plain\"><code>transformers==4.30.2\nflask==2.3.3\ntorch==2.0.1\nsacremoses==0.0.53\nsentencepiece==0.1.99<\/code><\/pre><\/div>\n\n\n\n<p>Just like the Ubuntu package versions, we should (have to!) pin the exact version of each dependency. This is the best way to ensure that we can reproduce this environment any time in the future, and to prevent unexpected crashes caused by changes in downstream dependencies.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Downloading all models in the container<\/h3>\n\n\n\n<p>As you can see in the previous <code>RUN<\/code> command, the next step is to download all models and tokenizers into the working directory so that we bake the models&#8217; artifacts directly into the container. That ensures that we minimize the time it takes to initialize a container. 
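Since the artifacts get baked into the image, it is worth measuring how much disk space they add. Here is a small helper, hypothetical and not part of the repository, that reports the size of a saved model directory such as models/en_fr/:

```python
import os

def dir_size_mb(path: str) -> float:
    """Total size, in megabytes, of all files under `path`."""
    total = 0
    for root, _, files in os.walk(path):
        for name in files:
            # Sum the on-disk size of every regular file in the tree
            total += os.path.getsize(os.path.join(root, name))
    return total / (1024 * 1024)
```

For reference, each of the two Marian models used here is on the order of a few hundred megabytes on disk, and all of it ends up in the image.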
We spend the time downloading all those artifacts at build time instead of run time. The downside is that the container will be much bigger, depending on the models that are required.<\/p>\n<p>The <code>download_models.py<\/code> file is a utility file used to download the Hugging Face models used by the service directly into the container. The code simply downloads the model and tokenizer files from Hugging Face and saves them locally (in the working directory of the container):<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-python\" data-lang=\"Python\"><code>from transformers import AutoTokenizer, AutoModelForSeq2SeqLM\nimport os\n\ndef download_model(model_path, model_name):\n    &quot;&quot;&quot;Download a Hugging Face model and tokenizer to the specified directory&quot;&quot;&quot;\n    # Check if the directory already exists\n    if not os.path.exists(model_path):\n        # Create the directory\n        os.makedirs(model_path)\n\n    tokenizer = AutoTokenizer.from_pretrained(model_name)\n    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)\n\n    # Save the model and tokenizer to the specified directory\n    model.save_pretrained(model_path)\n    tokenizer.save_pretrained(model_path)\n\n# For this demo, download the English-French and French-English models\ndownload_model(&#39;models\/en_fr\/&#39;, &#39;Helsinki-NLP\/opus-mt-en-fr&#39;)\ndownload_model(&#39;models\/fr_en\/&#39;, &#39;Helsinki-NLP\/opus-mt-fr-en&#39;)<\/code><\/pre><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">Creating the Flask translation web service endpoint<\/h3>\n\n\n\n<p>The last thing we have to do with the Dockerfile is to expose the port where the web service will be available and to tell the container what to run when it starts:<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-plain\"><code># Make port 6000 available to the world outside this container\nEXPOSE 6000\n\nENTRYPOINT [ &quot;python3&quot; ]\n\n# Run main.py when 
the container launches\nCMD [ &quot;main.py&quot; ]<\/code><\/pre><\/div>\n\n\n\n<p>We expose the port <code>6000<\/code> to the outside world, and we tell Docker to run the <code>python3<\/code> command with <code>main.py<\/code>. Note that <code>EXPOSE<\/code> only documents the port: it still has to be mapped to a host port when the container is started, as we will see later. The <code>main.py<\/code> file is a very simple file that registers the web service&#8217;s routes using Flask, and that makes the predictions (translations, in this case):<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-python\" data-lang=\"Python\"><code>from flask import Flask, request, jsonify\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\n\ndef get_model(model_path):\n    &quot;&quot;&quot;Load a Hugging Face model and tokenizer from the specified directory&quot;&quot;&quot;\n    tokenizer = AutoTokenizer.from_pretrained(model_path)\n    model = AutoModelForSeq2SeqLM.from_pretrained(model_path)\n    return model, tokenizer\n\n# Load the models and tokenizers for each supported language\nen_fr_model, en_fr_tokenizer = get_model(&#39;models\/en_fr\/&#39;)\nfr_en_model, fr_en_tokenizer = get_model(&#39;models\/fr_en\/&#39;)\n\napp = Flask(__name__)\n\ndef is_translation_supported(from_lang, to_lang):\n    &quot;&quot;&quot;Check if the specified translation is supported&quot;&quot;&quot;\n    supported_translations = [&#39;en_fr&#39;, &#39;fr_en&#39;]\n    return f&#39;{from_lang}_{to_lang}&#39; in supported_translations\n\n@app.route(&#39;\/translate\/&lt;from_lang&gt;\/&lt;to_lang&gt;\/&#39;, methods=[&#39;POST&#39;])\ndef translate_endpoint(from_lang, to_lang):\n    &quot;&quot;&quot;Translate text from one language to another. 
This function is \n    called when a POST request is sent to \/translate\/&lt;from_lang&gt;\/&lt;to_lang&gt;\/&quot;&quot;&quot;\n    if not is_translation_supported(from_lang, to_lang):\n        return jsonify({&#39;error&#39;: &#39;Translation not supported&#39;}), 400\n\n    data = request.get_json()\n    from_text = data.get(f&#39;{from_lang}_text&#39;, &#39;&#39;)\n\n    if from_text:\n        model = None\n        tokenizer = None\n\n        match from_lang:\n            case &#39;en&#39;:\n                model = en_fr_model\n                tokenizer = en_fr_tokenizer\n            case &#39;fr&#39;:\n                model = fr_en_model\n                tokenizer = fr_en_tokenizer\n\n        to_text = tokenizer.decode(model.generate(tokenizer.encode(from_text, return_tensors=&#39;pt&#39;)).squeeze(), skip_special_tokens=True)\n\n        return jsonify({f&#39;{to_lang}_text&#39;: to_text})\n    else:\n        return jsonify({&#39;error&#39;: &#39;Text to translate not provided&#39;}), 400\n\nif __name__ == &#39;__main__&#39;:\n    app.run(host=&#39;0.0.0.0&#39;, port=6000, debug=True)\n<\/code><\/pre><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Building the container<\/h2>\n\n\n\n<p>Now that the <code>Dockerfile<\/code> is completed, the next step is to use it to have Docker build the actual image of the container. This is done using this command in the terminal:<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-bash\" data-lang=\"Bash\"><code>docker build -t localbuild:en_fr_translation_service .<\/code><\/pre><\/div>\n\n\n\n<p>Note that we specified a tag to make the image easier to manage among all the other images that may exist in the environment. The output of the terminal will show every step defined in the <code>Dockerfile<\/code>, and the processing of each of those steps. 
The final output looks like:<\/p>\n<p><a href=\"https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/build_output.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/build_output.jpg\" alt=\"\" width=\"1948\" height=\"674\" class=\"alignnone size-full wp-image-3711\" srcset=\"https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/build_output.jpg 1948w, https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/build_output-300x104.jpg 300w, https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/build_output-1024x354.jpg 1024w, https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/build_output-768x266.jpg 768w, https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/build_output-1536x531.jpg 1536w\" sizes=\"auto, (max-width: 1948px) 100vw, 1948px\" \/><\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Running and Querying the service<\/h2>\n\n\n\n<p>Now that we have a brand new image, the next step is to test it. 
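Since the endpoint paths follow a simple pattern, you can also script your tests. The sketch below is hypothetical: it mirrors the is_translation_supported check from main.py and assumes the container&#8217;s port is mapped to localhost:6000:

```python
# Language pairs the service supports, mirroring main.py
SUPPORTED_PAIRS = {"en_fr", "fr_en"}

def translate_url(from_lang: str, to_lang: str,
                  base: str = "http://localhost:6000") -> str:
    """Build the /translate/ endpoint URL, rejecting unsupported pairs."""
    pair = f"{from_lang}_{to_lang}"
    if pair not in SUPPORTED_PAIRS:
        raise ValueError(f"Translation not supported: {pair}")
    return f"{base}/translate/{from_lang}/{to_lang}/"

print(translate_url("en", "fr"))  # → http://localhost:6000/translate/en/fr/
```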
In this section, I will use Docker Desktop&#8217;s user interface to show how we can easily do this, but all those steps can easily be done (and automated) using the <code>docker<\/code> command line application.<\/p>\n<p>After you build the image, it will automatically appear in the <code>images<\/code> section of Docker Desktop:<\/p>\n<p><a href=\"https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/images.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/images.jpg\" alt=\"\" width=\"2036\" height=\"1190\" class=\"alignnone size-full wp-image-3714\" srcset=\"https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/images.jpg 2036w, https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/images-300x175.jpg 300w, https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/images-1024x599.jpg 1024w, https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/images-768x449.jpg 768w, https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/images-1536x898.jpg 1536w\" sizes=\"auto, (max-width: 2036px) 100vw, 2036px\" \/><\/a><\/p>\n\n\n\n<p>You can see the tag of the image, its size, when it was created, etc. To start a container from that image, we simply have to click the <code>play arrow<\/code> in the <code>Actions<\/code> column. 
That will start running a new container using that image.<\/p>\n<p>Docker Desktop lets you add some more parameters before starting the container, using the following window:<\/p>\n<p><a href=\"https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/run_container.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/run_container.jpg\" alt=\"\" width=\"405\" height=\"408\" class=\"wp-image-3715 aligncenter\" srcset=\"https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/run_container.jpg 1054w, https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/run_container-298x300.jpg 298w, https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/run_container-1016x1024.jpg 1016w, https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/run_container-150x150.jpg 150w, https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/run_container-768x774.jpg 768w\" sizes=\"auto, (max-width: 405px) 100vw, 405px\" \/><\/a><\/p>\n<p>The most important thing to define here is the <code>Host port<\/code>. 
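Before sending any request, you can check that the host port mapping actually took effect. This quick TCP probe is my own sketch (standard library only), not part of the repository:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        # create_connection performs the full TCP handshake,
        # so a True result means something is listening there.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Once the container is running with the host port set to 6000, port_open("localhost", 6000) should return True.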
If you leave it empty, then the port <code>6000<\/code> we exposed in the Dockerfile will remain unbound, and we won&#8217;t be able to reach the service running in the container. Since the <code>curl<\/code> queries below target <code>localhost:6000<\/code>, set the host port to <code>6000<\/code>.<\/p>\n\n\n\n<p>Once you click the <code>Run<\/code> button, the container will appear in the <code>Containers<\/code> section:<\/p>\n<p><a href=\"https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/containers.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/containers.jpg\" alt=\"\" width=\"2022\" height=\"1184\" class=\"alignnone size-full wp-image-3713\" srcset=\"https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/containers.jpg 2022w, https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/containers-300x176.jpg 300w, https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/containers-1024x600.jpg 1024w, https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/containers-768x450.jpg 768w, https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/containers-1536x899.jpg 1536w\" sizes=\"auto, (max-width: 2022px) 100vw, 2022px\" \/><\/a><\/p>\n<p>And if you click on its name, you will have access to the internals of the container (the files it contains, the execution logs, etc.):<\/p>\n<p><a href=\"https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/container_details.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/container_details.jpg\" alt=\"\" width=\"2028\" height=\"1190\" class=\"alignnone size-full wp-image-3712\" srcset=\"https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/container_details.jpg 2028w, https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/container_details-300x176.jpg 300w, https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/container_details-1024x601.jpg 1024w, 
https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/container_details-768x451.jpg 768w, https:\/\/fgiasson.com\/blog\/wp-content\/uploads\/2023\/08\/container_details-1536x901.jpg 1536w\" sizes=\"auto, (max-width: 2028px) 100vw, 2028px\" \/><\/a><\/p>\n\n\n\n<p>Now that the container is running, we can query the endpoint like this:<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-bash\" data-lang=\"Bash\"><code>curl -X POST http:\/\/localhost:6000\/translate\/en\/fr\/ -H &quot;Content-Type: application\/json&quot; -v -d &#39;{&quot;en_text&quot;: &quot;Towards Certification of Machine Learning-Based Distributed Systems Behavior&quot;}&#39;<\/code><\/pre><\/div>\n\n\n\n<p>It returns:<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-json\" data-lang=\"JSON\"><code>{\n  &quot;fr_text&quot;: &quot;Vers la certification des syst\\u00e8mes distribu\\u00e9s fond\\u00e9s sur l&#39;apprentissage automatique&quot;\n}<\/code><\/pre><\/div>\n\n\n\n<p>And then for the French to English translation:<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-bash\" data-lang=\"Bash\"><code>curl -X POST http:\/\/localhost:6000\/translate\/fr\/en\/ -H &quot;Content-Type: application\/json&quot; -v -d &#39;{&quot;fr_text&quot;: &quot;Ce qu&#39;\\&#39;&#39;il y a d&#39;\\&#39;&#39;admirable dans le bonheur des autres, c&#39;\\&#39;&#39;est qu&#39;\\&#39;&#39;on y croit.&quot;}&#39;<\/code><\/pre><\/div>\n\n\n\n<p>It returns:<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-json\" data-lang=\"JSON\"><code>{\n  &quot;en_text&quot;: &quot;What is admirable in the happiness of others is that one believes in it.&quot;\n}<\/code><\/pre><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>As we can see, it is pretty straightforward to create simple Docker containers that turn pretty much any Hugging Face pre-trained model into a web service 
endpoint.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this short tutorial, we will explore how Hugging Face models can be deployed in a Docker container and exposed as a web service endpoint. The service it exposes is a translation service from English to French and French to English. Why would someone want to do that? Other than to learn about those specific [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[293,309],"tags":[310,311,312],"class_list":["post-3699","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-mlops","tag-docker","tag-huggingface","tag-mlops"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/fgiasson.com\/blog\/index.php\/wp-json\/wp\/v2\/posts\/3699","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/fgiasson.com\/blog\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/fgiasson.com\/blog\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/fgiasson.com\/blog\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/fgiasson.com\/blog\/index.php\/wp-json\/wp\/v2\/comments?post=3699"}],"version-history":[{"count":14,"href":"https:\/\/fgiasson.com\/blog\/index.php\/wp-json\/wp\/v2\/posts\/3699\/revisions"}],"predecessor-version":[{"id":3718,"href":"https:\/\/fgiasson.com\/blog\/index.php\/wp-json\/wp\/v2\/posts\/3699\/revisions\/3718"}],"wp:attachment":[{"href":"https:\/\/fgiasson.com\/blog\/index.php\/wp-json\/wp\/v2\/media?parent=3699"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/fgiasson.com\/blog\/index.php\/wp-json\/wp\/v2\/categories?post=3699"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\
/fgiasson.com\/blog\/index.php\/wp-json\/wp\/v2\/tags?post=3699"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}