<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.3.2">Jekyll</generator><link href="https://tilrnt.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://tilrnt.github.io/" rel="alternate" type="text/html" /><updated>2025-04-05T15:42:13+00:00</updated><id>https://tilrnt.github.io/feed.xml</id><title type="html">Today I Learnt</title><subtitle>A collection of notes on things I learn on a daily basis.</subtitle><author><name>Chee Yeo 2023</name></author><entry><title type="html">Compile and install python 3.13 on Ubuntu 20.04</title><link href="https://tilrnt.github.io/ubuntu/python/2025/03/11/compile-install-python.html" rel="alternate" type="text/html" title="Compile and install python 3.13 on Ubuntu 20.04" /><published>2025-03-11T00:00:00+00:00</published><updated>2025-03-11T00:00:00+00:00</updated><id>https://tilrnt.github.io/ubuntu/python/2025/03/11/compile-install-python</id><content type="html" xml:base="https://tilrnt.github.io/ubuntu/python/2025/03/11/compile-install-python.html"><![CDATA[<p>While trying to compile and install python from source on Ubuntu 20.04, I kept hitting the following error after running make:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Need to install the following packages:

_dbm _tkinter
</code></pre></div></div>

<p>To fix the issue, I had to install the following packages:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt-get update &amp;&amp; sudo apt-get install libgdbm-compat-dev tk-dev
</code></pre></div></div>

<p>To install python 3.13.2, the full set of commands becomes:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt-get update

sudo apt install build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev libsqlite3-dev libbz2-dev libgdbm-compat-dev tk-dev wget

wget https://www.python.org/ftp/python/3.13.2/Python-3.13.2.tgz

tar -xvf Python-3.13.2.tgz

cd Python-3.13.2

./configure --enable-optimizations

make -j 4

sudo make altinstall

python3.13 --version
</code></pre></div></div>
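
<p>Once the build completes, a quick way to confirm that the previously missing optional modules are now importable is a short Python check. This is just a minimal sketch; the module list below is illustrative, covering the two modules from the error plus a couple of other commonly needed ones:</p>

```python
import importlib


def check_modules(mods=("dbm.gnu", "tkinter", "ssl", "sqlite3")):
    """Return a dict mapping each module name to True if it imports cleanly."""
    status = {}
    for mod in mods:
        try:
            importlib.import_module(mod)
            status[mod] = True
        except ImportError:
            status[mod] = False
    return status


if __name__ == "__main__":
    for mod, ok in sorted(check_modules().items()):
        print(f"{mod}: {'OK' if ok else 'MISSING'}")
```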

<p>Creating a virtual env from the new python install:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>python3.13 -m venv myvenv

source myvenv/bin/activate
</code></pre></div></div>]]></content><author><name>Chee Yeo 2023</name></author><category term="ubuntu" /><category term="python" /><summary type="html"><![CDATA[While trying to compile and install python from source on Ubuntu 20.04, I kept hitting the following error after running make:]]></summary></entry><entry><title type="html">PostgreSQL ON DELETE CASCADE</title><link href="https://tilrnt.github.io/postgresql/2025/03/11/postgresql-delete-cascade.html" rel="alternate" type="text/html" title="PostgreSQL ON DELETE CASCADE" /><published>2025-03-11T00:00:00+00:00</published><updated>2025-03-11T00:00:00+00:00</updated><id>https://tilrnt.github.io/postgresql/2025/03/11/postgresql-delete-cascade</id><content type="html" xml:base="https://tilrnt.github.io/postgresql/2025/03/11/postgresql-delete-cascade.html"><![CDATA[<p>In a recent project, I came across an issue of orphaned child entries in a database table after the parent entry is deleted. This is because the SQL schema doesn’t have the <code class="language-plaintext highlighter-rouge">ON DELETE CASCADE</code> statement after the <code class="language-plaintext highlighter-rouge">REFERENCES</code> statement in the child table.</p>

<p>The new SQL statements for table creation become:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>CREATE TABLE IF NOT EXISTS chats (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255),
    token_count INT DEFAULT 0,
    created_at TIMESTAMPTZ,
    updated_at TIMESTAMPTZ
);

CREATE TABLE IF NOT EXISTS chathistory (
    id SERIAL PRIMARY KEY,
    role VARCHAR(255),
    message TEXT,
    model_name VARCHAR(255),
    files TEXT[],
    created_at TIMESTAMPTZ,
    chat_id INT,
    FOREIGN KEY(chat_id) REFERENCES chats(id) ON DELETE CASCADE
);
</code></pre></div></div>
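
<p>The cascade behaviour can be demonstrated with Python's built-in sqlite3 module; a simplified sketch of the same schema (note that sqlite, unlike PostgreSQL, needs foreign key enforcement enabled explicitly per connection):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite disables FK enforcement by default

conn.execute("CREATE TABLE chats (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE chathistory (
        id INTEGER PRIMARY KEY,
        message TEXT,
        chat_id INTEGER,
        FOREIGN KEY(chat_id) REFERENCES chats(id) ON DELETE CASCADE
    )
""")

conn.execute("INSERT INTO chats (id, name) VALUES (1, 'demo')")
conn.execute("INSERT INTO chathistory (message, chat_id) VALUES ('hello', 1)")

# Deleting the parent removes the child rows via the cascade
conn.execute("DELETE FROM chats WHERE id = 1")
orphans = conn.execute("SELECT COUNT(*) FROM chathistory").fetchone()[0]
print(orphans)  # 0
```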

<p>The above ensures that whenever a chat entry is deleted, the associated child entries in the chathistory table are also removed due to the delete cascade statement.</p>]]></content><author><name>Chee Yeo 2023</name></author><category term="postgresql" /><summary type="html"><![CDATA[In a recent project, I came across an issue of orphaned child entries in a database table after the parent entry is deleted. This is because the SQL schema doesn’t have the ON DELETE CASCADE statement after the REFERENCES statement in the child table.]]></summary></entry><entry><title type="html">Install Vault in docker image</title><link href="https://tilrnt.github.io/vault/docker/2024/11/01/install-vault-in-docker.html" rel="alternate" type="text/html" title="Install Vault in docker image" /><published>2024-11-01T00:00:00+00:00</published><updated>2024-11-01T00:00:00+00:00</updated><id>https://tilrnt.github.io/vault/docker/2024/11/01/install-vault-in-docker</id><content type="html" xml:base="https://tilrnt.github.io/vault/docker/2024/11/01/install-vault-in-docker.html"><![CDATA[<p>To install the <code class="language-plaintext highlighter-rouge">vault</code> client in a docker image, we can follow the <a href="https://developer.hashicorp.com/vault/install">Official Vault install documentation</a>.</p>

<p>However, this throws an <code class="language-plaintext highlighter-rouge">operation not permitted</code> error when trying to run the vault client. This is due to vault attempting to lock memory to prevent sensitive values from being swapped to disk. This is a <a href="https://github.com/hashicorp/vault/issues/10048">Reported Issue</a> and is also mentioned on the <a href="https://hub.docker.com/r/hashicorp/vault">Vault docker hub</a> page.</p>

<p>We can overcome this by running the container with <code class="language-plaintext highlighter-rouge">--cap-add</code> flag:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run --cap-add=IPC_LOCK -d --name=dev-vault mycustomimage vault
</code></pre></div></div>

<p>However, this capability is not required when using <code class="language-plaintext highlighter-rouge">Raft integrated storage</code>, where disabling memory locking is the recommended setup.</p>

<p>To resolve the issue, we can instead use a multistage build to copy the vault binary into the image rather than installing it via the package manager. The package install applies the <code class="language-plaintext highlighter-rouge">IPC_LOCK</code> file capability to the binary via <code class="language-plaintext highlighter-rouge">setcap</code>, which is what triggers the error inside a container; a plainly copied binary carries no such capability.</p>

<p>An example Dockerfile:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>FROM hashicorp/vault:1.18 as vaultsource

FROM ubuntu:22.04 as base

COPY --from=vaultsource /bin/vault /usr/local/bin/vault
</code></pre></div></div>

<p>Using the above, we can create a custom image with a vault CLI that doesn’t throw an operation not permitted error.</p>]]></content><author><name>Chee Yeo 2023</name></author><category term="vault" /><category term="docker" /></entry><entry><title type="html">Expired GH client GPG keys</title><link href="https://tilrnt.github.io/ubuntu/gh/2024/09/08/gh-client-apt-error.html" rel="alternate" type="text/html" title="Expired GH client GPG keys" /><published>2024-09-08T00:00:00+00:00</published><updated>2024-09-08T00:00:00+00:00</updated><id>https://tilrnt.github.io/ubuntu/gh/2024/09/08/gh-client-apt-error</id><content type="html" xml:base="https://tilrnt.github.io/ubuntu/gh/2024/09/08/gh-client-apt-error.html"><![CDATA[<p>When running <code class="language-plaintext highlighter-rouge">sudo apt update</code> recently, I received an error when trying to update the gh client:</p>

<figure class="highlight"><pre><code class="language-shell" data-lang="shell">W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://cli.github.com/packages stable InRelease: The following signatures were invalid: EXPKEYSIG 23F3D4EA75716059 GitHub CLI &lt;opensource+cli@github.com&gt;
W: Failed to fetch https://cli.github.com/packages/dists/stable/InRelease  The following signatures were invalid: EXPKEYSIG 23F3D4EA75716059 GitHub CLI &lt;opensource+cli@github.com&gt;
W: Some index files failed to download. They have been ignored, or old ones used instead.</code></pre></figure>

<p>This is due to the GPG key used to verify the .deb and .rpm repository expiring on the 6th September 2024. This is reported as a <a href="https://github.com/cli/cli/issues/9569">GH client install issue</a>.</p>

<p>To resolve it, one can run the provided script which downloads and reinstalls the new GPG key, fixing the error above:</p>

<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="c"># Check for wget, if not installed, install it</span>
<span class="o">(</span><span class="nb">type</span> <span class="nt">-p</span> wget <span class="o">&gt;</span>/dev/null <span class="o">||</span> <span class="o">(</span><span class="nb">sudo </span>apt update <span class="o">&amp;&amp;</span> <span class="nb">sudo </span>apt-get <span class="nb">install </span>wget <span class="nt">-y</span><span class="o">))</span> <span class="se">\</span>
    <span class="o">&amp;&amp;</span> <span class="nb">sudo mkdir</span> <span class="nt">-p</span> <span class="nt">-m</span> 755 /etc/apt/keyrings

<span class="c"># Set keyring path based on existence of /usr/share/keyrings/githubcli-archive-keyring.gpg</span>
<span class="c"># If it is in the old location, use that, otherwise always use the new location.</span>
<span class="k">if</span> <span class="o">[</span> <span class="nt">-f</span> /usr/share/keyrings/githubcli-archive-keyring.gpg <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nv">keyring_path</span><span class="o">=</span><span class="s2">"/usr/share/keyrings/githubcli-archive-keyring.gpg"</span>
<span class="k">else
    </span><span class="nv">keyring_path</span><span class="o">=</span><span class="s2">"/etc/apt/keyrings/githubcli-archive-keyring.gpg"</span>
<span class="k">fi

</span><span class="nb">echo</span> <span class="s2">"replacing keyring at </span><span class="k">${</span><span class="nv">keyring_path</span><span class="k">}</span><span class="s2">"</span>

<span class="c"># Download and set up the keyring</span>
wget <span class="nt">-qO-</span> https://cli.github.com/packages/githubcli-archive-keyring.gpg | <span class="nb">sudo tee</span> <span class="s2">"</span><span class="nv">$keyring_path</span><span class="s2">"</span> <span class="o">&gt;</span> /dev/null <span class="se">\</span>
    <span class="o">&amp;&amp;</span> <span class="nb">sudo chmod </span>go+r <span class="s2">"</span><span class="nv">$keyring_path</span><span class="s2">"</span>

<span class="c"># Idempotently add the GitHub CLI repository as an apt source</span>
<span class="nb">echo</span> <span class="s2">"deb [arch=</span><span class="si">$(</span>dpkg <span class="nt">--print-architecture</span><span class="si">)</span><span class="s2"> signed-by=</span><span class="nv">$keyring_path</span><span class="s2">] https://cli.github.com/packages stable main"</span> | <span class="nb">sudo tee</span> /etc/apt/sources.list.d/github-cli.list <span class="o">&gt;</span> /dev/null

<span class="c"># Update the package lists, which should now pass</span>
<span class="nb">sudo </span>apt update</code></pre></figure>

<p>By running the above, I was able to run apt update and apt upgrade again.</p>

<p>Hope it helps!</p>]]></content><author><name>Chee Yeo 2023</name></author><category term="ubuntu" /><category term="gh" /></entry><entry><title type="html">RVM on Ubuntu 22.04 Jellyfish</title><link href="https://tilrnt.github.io/rvm/ubuntu/2024/04/04/rvm-ubuntu-install.html" rel="alternate" type="text/html" title="RVM on Ubuntu 22.04 Jellyfish" /><published>2024-04-04T00:00:00+00:00</published><updated>2024-04-04T00:00:00+00:00</updated><id>https://tilrnt.github.io/rvm/ubuntu/2024/04/04/rvm-ubuntu-install</id><content type="html" xml:base="https://tilrnt.github.io/rvm/ubuntu/2024/04/04/rvm-ubuntu-install.html"><![CDATA[<p>I had to set up RVM recently on my Ubuntu 22.04 desktop. However, the original installation instructions were plagued with issues, namely with the openssl version shipped with Ubuntu 22.04, which conflicted with the ruby installation.</p>

<p>To fix this issue we need to:</p>
<ul>
  <li>Install the Ubuntu version of RVM</li>
  <li>Install an older openssl version as a package in rvm</li>
  <li>Reference the above openssl package during the rvm install</li>
</ul>

<p>Firstly, I had to install <a href="https://github.com/rvm/ubuntu_rvm">RVM Ubuntu</a>. I followed the original instructions in the README. Ensure that any existing RVM installations are removed first.</p>

<p>After RVM is installed properly, to install ruby 3.0.0, these were the steps I took:</p>

<figure class="highlight"><pre><code class="language-shell" data-lang="shell">rvm pkg <span class="nb">install </span>openssl

rvm <span class="nb">install </span>ruby-3.0.0 <span class="nt">--with-openssl-dir</span><span class="o">=</span>/usr/share/rvm/usr</code></pre></figure>

<p>Only after following the above steps was I able to get RVM to work.</p>

<p>Hope it helps someone!</p>]]></content><author><name>Chee Yeo 2023</name></author><category term="rvm" /><category term="ubuntu" /></entry><entry><title type="html">Python 3.12 ‘pkgutil has no attribute ImpImporter’ error</title><link href="https://tilrnt.github.io/python/pip/venv/2024/03/16/python-venv-pip-errors.html" rel="alternate" type="text/html" title="Python 3.12 ‘pkgutil has no attribute ImpImporter’ error" /><published>2024-03-16T00:00:00+00:00</published><updated>2024-03-16T00:00:00+00:00</updated><id>https://tilrnt.github.io/python/pip/venv/2024/03/16/python-venv-pip-errors</id><content type="html" xml:base="https://tilrnt.github.io/python/pip/venv/2024/03/16/python-venv-pip-errors.html"><![CDATA[<p>While using a virtual env created with python 3.12, I installed a package which resulted in <code class="language-plaintext highlighter-rouge">pip</code> throwing an error of:</p>

<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="nb">AttributeError</span><span class="p">:</span> <span class="n">module</span> <span class="s">'pkgutil'</span> <span class="n">has</span> <span class="n">no</span> <span class="n">attribute</span> <span class="s">'ImpImporter'</span><span class="p">.</span> <span class="n">Did</span> <span class="n">you</span> <span class="n">mean</span><span class="p">:</span> <span class="s">'zipimporter'</span><span class="err">?</span></code></pre></figure>

<p>This occurred after installing <code class="language-plaintext highlighter-rouge">setuptools</code>, which was a dependency of another package. As a result, I was unable to use <code class="language-plaintext highlighter-rouge">pip</code> itself to remove setuptools.</p>
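
<p>The underlying cause is that <code class="language-plaintext highlighter-rouge">pkgutil.ImpImporter</code> was removed in python 3.12, so older setuptools releases that still reference it break. A quick check of what the interpreter exposes:</p>

```python
import pkgutil
import sys

# pkgutil.ImpImporter was removed in python 3.12; older setuptools
# releases still reference it, which triggers the AttributeError.
has_imp_importer = hasattr(pkgutil, "ImpImporter")
print(sys.version_info[:2], "ImpImporter present:", has_imp_importer)
```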

<p>Rather than re-create a new venv, the only way to resolve this issue is to reinstall pip by downloading the pip install script from <code class="language-plaintext highlighter-rouge">https://bootstrap.pypa.io/get-pip.py</code> and running it again:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py

python get-pip.py
</code></pre></div></div>

<p>By doing so, I was able to get pip working again without rebuilding the entire venv.</p>

<p>This is highlighted in the cpython github repo: <a href="https://github.com/python/cpython/issues/95299">cpython issue #95299</a>.</p>]]></content><author><name>Chee Yeo 2023</name></author><category term="python" /><category term="pip" /><category term="venv" /><summary type="html"><![CDATA[While using a virtual env created with python 3.12, I installed a package which resulted in pip throwing an error of:]]></summary></entry><entry><title type="html">Forward User IP from Cloudfront distribution</title><link href="https://tilrnt.github.io/aws/cloudfront/terraform/2023/03/16/aws-cloudfront-forward-ip.html" rel="alternate" type="text/html" title="Forward User IP from Cloudfront distribution" /><published>2023-03-16T00:00:00+00:00</published><updated>2023-03-16T00:00:00+00:00</updated><id>https://tilrnt.github.io/aws/cloudfront/terraform/2023/03/16/aws-cloudfront-forward-ip</id><content type="html" xml:base="https://tilrnt.github.io/aws/cloudfront/terraform/2023/03/16/aws-cloudfront-forward-ip.html"><![CDATA[<p>In a recent project, I was troubleshooting an issue with a cloudfront distribution not passing the right request headers to the origin.</p>

<p>According to <a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html#RequestCustomIPAddresses">HTTP request headers and CloudFront behavior</a>, for the <code class="language-plaintext highlighter-rouge">Host</code> header:</p>

<blockquote>
  <p>CloudFront sets the value to the domain name of the origin that is associated with the requested object.</p>

</blockquote>

<p>For <code class="language-plaintext highlighter-rouge">X-Forwarded-Proto</code>:</p>
<blockquote>
  <p>CloudFront removes the header.</p>

</blockquote>

<p>By default, Cloudfront will forward the IP of the distribution to the origin and not the real user’s IP. In addition it will also remove the <code class="language-plaintext highlighter-rouge">X-Forwarded-Proto</code> header.</p>

<p>To resolve the issue we need to add those two headers to the distribution via a custom policy.</p>

<p>But which policy do we add them to? A cache policy? An origin request policy?</p>

<p>To provide some context, recent changes to Cloudfront encourage the use of policies to control the behaviour of the cache key, requests and response headers.</p>

<p>As per their <a href="https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-cloudfront-announces-cache-and-origin-request-policies/">Cloudfront Policy blog post</a>, Cache policies are generally used for caching assets. Origin request policies should be used instead to modify the request headers, since they are applied during a cache miss or revalidation. In my use case, I don’t want the user’s IP to be cached but instead forwarded to the origin, so an origin request policy is more appropriate.</p>

<p>Since the cloudfront distribution was built using terraform, I was able to create a custom origin request policy and attach it to the distribution.</p>

<figure class="highlight"><pre><code class="language-terraform" data-lang="terraform"><span class="k">resource</span> <span class="s2">"aws_cloudfront_origin_request_policy"</span> <span class="s2">"example"</span> <span class="p">{</span>
  <span class="nx">name</span>    <span class="p">=</span> <span class="s2">"example-policy"</span>
  <span class="nx">comment</span> <span class="p">=</span> <span class="s2">"example comment"</span>
  <span class="nx">cookies_config</span> <span class="p">{</span>
    <span class="nx">cookie_behavior</span> <span class="p">=</span> <span class="s2">"none"</span>
  <span class="p">}</span>

  <span class="nx">headers_config</span> <span class="p">{</span>
    <span class="nx">header_behavior</span> <span class="p">=</span> <span class="s2">"whitelist"</span>
    <span class="nx">headers</span> <span class="p">{</span>
      <span class="nx">items</span> <span class="p">=</span> <span class="p">[</span><span class="s2">"Host"</span><span class="p">,</span> <span class="s2">"Cloudfront-Forwarded-Proto"</span><span class="p">]</span>
    <span class="p">}</span>
  <span class="p">}</span>

  <span class="nx">query_strings_config</span> <span class="p">{</span>
    <span class="nx">query_string_behavior</span> <span class="p">=</span> <span class="s2">"none"</span>
  <span class="p">}</span>
<span class="p">}</span>

<span class="c1"># Attach the above policy to the distribution</span>
<span class="k">resource</span> <span class="s2">"aws_cloudfront_distribution"</span> <span class="s2">"s3_distribution"</span> <span class="p">{</span>
  <span class="p">....</span>


  <span class="nx">enabled</span>             <span class="p">=</span> <span class="kc">true</span>
  <span class="nx">is_ipv6_enabled</span>     <span class="p">=</span> <span class="kc">true</span>
  <span class="nx">comment</span>             <span class="p">=</span> <span class="s2">"Some comment"</span>

  <span class="p">...</span>

  <span class="c1"># Attach the above policy to the distribution</span>
  <span class="nx">default_cache_behavior</span> <span class="p">{</span>
    <span class="p">...</span>


    <span class="nx">origin_request_policy_id</span>  <span class="p">=</span> <span class="nx">aws_cloudfront_origin_request_policy</span><span class="p">.</span><span class="nx">example</span><span class="p">.</span><span class="nx">id</span>
    <span class="nx">path_pattern</span>     <span class="p">=</span> <span class="s2">"/*"</span>
  <span class="p">}</span>
  <span class="p">...</span>

<span class="p">}</span></code></pre></figure>

<p>We added the <code class="language-plaintext highlighter-rouge">Host</code> and <code class="language-plaintext highlighter-rouge">Cloudfront-Forwarded-Proto</code> headers to the custom policy.</p>
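
<p>Once the header reaches the origin, the client IP can be recovered from the X-Forwarded-For value, where the original client address comes first and any intermediate proxies are appended after it. A minimal sketch (the sample addresses below are made up for illustration):</p>

```python
def client_ip(x_forwarded_for: str) -> str:
    """Return the left-most entry, which is the original client address."""
    return x_forwarded_for.split(",")[0].strip()


# Example value as it might appear in the origin's logs (hypothetical IPs)
print(client_ip("203.0.113.7, 130.176.12.34"))  # 203.0.113.7
```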

<p>In my use case, the client IP now appears in the origin’s logs in the format <code class="language-plaintext highlighter-rouge">x_forwarded_for: &lt;my ip&gt;</code>.</p>]]></content><author><name>Chee Yeo 2023</name></author><category term="aws" /><category term="cloudfront" /><category term="terraform" /></entry><entry><title type="html">Redact PII from Cloudfront logs</title><link href="https://tilrnt.github.io/aws/terraform/2023/03/06/aws-cloudfront-logs-redact.html" rel="alternate" type="text/html" title="Redact PII from Cloudfront logs" /><published>2023-03-06T00:00:00+00:00</published><updated>2023-03-06T00:00:00+00:00</updated><id>https://tilrnt.github.io/aws/terraform/2023/03/06/aws-cloudfront-logs-redact</id><content type="html" xml:base="https://tilrnt.github.io/aws/terraform/2023/03/06/aws-cloudfront-logs-redact.html"><![CDATA[<p>In a recent project, I was asked to investigate how to redact or remove personally identifiable information which is stored in cloudfront logs via <code class="language-plaintext highlighter-rouge">AWS WAF</code> for audit purposes.</p>

<p>Using a resource of <code class="language-plaintext highlighter-rouge">aws_wafv2_web_acl_logging_configuration</code>, we are able to declare a <code class="language-plaintext highlighter-rouge">redacted_fields</code> block to identify which parts of the request to remove. Within the block we can only declare an argument of <code class="language-plaintext highlighter-rouge">method</code>, <code class="language-plaintext highlighter-rouge">query_string</code>, <code class="language-plaintext highlighter-rouge">single_header</code> and <code class="language-plaintext highlighter-rouge">uri_path</code>.</p>

<p>Only the <code class="language-plaintext highlighter-rouge">single_header</code> argument takes a <code class="language-plaintext highlighter-rouge">name</code> attribute which is what I need in my use case.</p>

<p>By entering each header name in an individual block, I was able to filter it out from the cloudfront logs:</p>

<figure class="highlight"><pre><code class="language-terraform" data-lang="terraform"><span class="k">resource</span> <span class="s2">"aws_wafv2_web_acl_logging_configuration"</span> <span class="s2">"example"</span> <span class="p">{</span>
  <span class="nx">log_destination_configs</span> <span class="p">=</span> <span class="p">[</span><span class="nx">aws_kinesis_firehose_delivery_stream</span><span class="p">.</span><span class="nx">example</span><span class="p">.</span><span class="nx">arn</span><span class="p">]</span>
  <span class="nx">resource_arn</span>            <span class="p">=</span> <span class="nx">aws_wafv2_web_acl</span><span class="p">.</span><span class="nx">example</span><span class="p">.</span><span class="nx">arn</span>
  <span class="nx">redacted_fields</span> <span class="p">{</span>
    <span class="nx">single_header</span> <span class="p">{</span>
      <span class="nx">name</span> <span class="p">=</span> <span class="s2">"header-1"</span>
    <span class="p">}</span>

    <span class="nx">single_header</span> <span class="p">{</span>
      <span class="nx">name</span> <span class="p">=</span> <span class="s2">"header-2"</span>
    <span class="p">}</span>
  <span class="p">}</span>
<span class="p">}</span></code></pre></figure>
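
<p>Conceptually, the redaction replaces the value of each listed header before the log record is delivered. A rough Python sketch of the effect (the log record shape here is simplified and hypothetical, not the exact WAF log schema):</p>

```python
REDACTED = "REDACTED"


def redact_headers(record: dict, names: set) -> dict:
    """Return a copy of the log record with the named headers' values replaced."""
    redacted = [
        {**h, "value": REDACTED} if h["name"].lower() in names else h
        for h in record.get("headers", [])
    ]
    return {**record, "headers": redacted}


record = {"headers": [{"name": "header-1", "value": "secret"},
                      {"name": "accept", "value": "text/html"}]}
print(redact_headers(record, {"header-1", "header-2"}))
```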

<p>To test that it works, I triggered fake requests which were sent via Kinesis Firehose and populated the logs. Then I accessed the logs via S3 and checked that the listed headers had been replaced with <strong>REDACTED</strong>.</p>

<p>More information can be found in the <a href="https://docs.aws.amazon.com/waf/latest/developerguide/logging-management.html" target="_blank">WAF Logging management documentation</a>.</p>]]></content><author><name>Chee Yeo 2023</name></author><category term="aws" /><category term="terraform" /></entry><entry><title type="html">Get list of Availability Zones in a given region</title><link href="https://tilrnt.github.io/aws/aws-cli/2023/01/31/aws-az-check.html" rel="alternate" type="text/html" title="Get list of Availability Zones in a given region" /><published>2023-01-31T00:00:00+00:00</published><updated>2023-01-31T00:00:00+00:00</updated><id>https://tilrnt.github.io/aws/aws-cli/2023/01/31/aws-az-check</id><content type="html" xml:base="https://tilrnt.github.io/aws/aws-cli/2023/01/31/aws-az-check.html"><![CDATA[<p>A recent terraform deployment for a VPC failed with an error of <code class="language-plaintext highlighter-rouge">Resource not available in given availability zone</code>.</p>

<p>I used the <code class="language-plaintext highlighter-rouge">aws-cli</code> to get the list of AZs in the region in question:</p>

<figure class="highlight"><pre><code class="language-shell" data-lang="shell">aws ec2 describe-availability-zones <span class="nt">--region</span> eu-west-1</code></pre></figure>

<p>It returns a list as follows:</p>

<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="o">{</span>
    <span class="s2">"AvailabilityZones"</span>: <span class="o">[</span>
        <span class="o">{</span>
            <span class="s2">"State"</span>: <span class="s2">"available"</span>,
            <span class="s2">"OptInStatus"</span>: <span class="s2">"opt-in-not-required"</span>,
            <span class="s2">"Messages"</span>: <span class="o">[]</span>,
            <span class="s2">"RegionName"</span>: <span class="s2">"eu-west-1"</span>,
            <span class="s2">"ZoneName"</span>: <span class="s2">"eu-west-1a"</span>,
            <span class="s2">"ZoneId"</span>: <span class="s2">"euw1-az2"</span>,
            <span class="s2">"GroupName"</span>: <span class="s2">"eu-west-1"</span>,
            <span class="s2">"NetworkBorderGroup"</span>: <span class="s2">"eu-west-1"</span>,
            <span class="s2">"ZoneType"</span>: <span class="s2">"availability-zone"</span>
        <span class="o">}</span>,
        <span class="o">{</span>
            <span class="s2">"State"</span>: <span class="s2">"available"</span>,
            <span class="s2">"OptInStatus"</span>: <span class="s2">"opt-in-not-required"</span>,
            <span class="s2">"Messages"</span>: <span class="o">[]</span>,
            <span class="s2">"RegionName"</span>: <span class="s2">"eu-west-1"</span>,
            <span class="s2">"ZoneName"</span>: <span class="s2">"eu-west-1b"</span>,
            <span class="s2">"ZoneId"</span>: <span class="s2">"euw1-az3"</span>,
            <span class="s2">"GroupName"</span>: <span class="s2">"eu-west-1"</span>,
            <span class="s2">"NetworkBorderGroup"</span>: <span class="s2">"eu-west-1"</span>,
            <span class="s2">"ZoneType"</span>: <span class="s2">"availability-zone"</span>
        <span class="o">}</span>,
        <span class="o">{</span>
            <span class="s2">"State"</span>: <span class="s2">"available"</span>,
            <span class="s2">"OptInStatus"</span>: <span class="s2">"opt-in-not-required"</span>,
            <span class="s2">"Messages"</span>: <span class="o">[]</span>,
            <span class="s2">"RegionName"</span>: <span class="s2">"eu-west-1"</span>,
            <span class="s2">"ZoneName"</span>: <span class="s2">"eu-west-1c"</span>,
            <span class="s2">"ZoneId"</span>: <span class="s2">"euw1-az1"</span>,
            <span class="s2">"GroupName"</span>: <span class="s2">"eu-west-1"</span>,
            <span class="s2">"NetworkBorderGroup"</span>: <span class="s2">"eu-west-1"</span>,
            <span class="s2">"ZoneType"</span>: <span class="s2">"availability-zone"</span>
        <span class="o">}</span>
    <span class="o">]</span>
<span class="o">}</span></code></pre></figure>
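
<p>When scripting the same check, the JSON response can be filtered down to just the zone names; a short Python sketch (using a trimmed copy of the response above):</p>

```python
import json

# Trimmed copy of the describe-availability-zones response
raw = """
{"AvailabilityZones": [
    {"State": "available", "RegionName": "eu-west-1", "ZoneName": "eu-west-1a"},
    {"State": "available", "RegionName": "eu-west-1", "ZoneName": "eu-west-1b"},
    {"State": "available", "RegionName": "eu-west-1", "ZoneName": "eu-west-1c"}
]}
"""


def available_zones(payload: str) -> list:
    """Return the names of zones whose State is 'available'."""
    data = json.loads(payload)
    return [az["ZoneName"] for az in data["AvailabilityZones"]
            if az["State"] == "available"]


print(available_zones(raw))  # ['eu-west-1a', 'eu-west-1b', 'eu-west-1c']
```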

<p>That allowed me to identify the missing AZ and switch to another region with the right number of AZs.</p>]]></content><author><name>Chee Yeo 2023</name></author><category term="aws" /><category term="aws-cli" /><summary type="html"><![CDATA[A recent terraform deployment for a VPC failed with an error of Resource not available in given availability zone.]]></summary></entry><entry><title type="html">Fix docker network build issues</title><link href="https://tilrnt.github.io/docker/networking/2023/01/31/docker-build-issues.html" rel="alternate" type="text/html" title="Fix docker network build issues" /><published>2023-01-31T00:00:00+00:00</published><updated>2023-01-31T00:00:00+00:00</updated><id>https://tilrnt.github.io/docker/networking/2023/01/31/docker-build-issues</id><content type="html" xml:base="https://tilrnt.github.io/docker/networking/2023/01/31/docker-build-issues.html"><![CDATA[<p>When running a docker build after the docker daemon was updated, the build kept failing with:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Could not connect to archive.ubuntu.com:80 (185.125.190.36), connection timed out Could not connect to archive.ubuntu.com:80 (91.189.91.39), connection timed out Could not connect to archive.ubuntu.com:80 (185.125.190.39), connection timed out
...
</code></pre></div></div>

<p>It turns out that the docker daemon is unable to use the host networking to do an <code class="language-plaintext highlighter-rouge">apt-get update</code> within the ubuntu container during the build process, and as such is unable to call out to the remote host.</p>

<p>To fix the issue system-wide, we can create a <code class="language-plaintext highlighter-rouge">/etc/docker/daemon.json</code> file with the right nameserver entries and restart the docker daemon.</p>

<p>Firstly, run the following to get the host DNS server IP:</p>

<figure class="highlight"><pre><code class="language-shell" data-lang="shell">nmcli dev show | <span class="nb">grep</span> <span class="s1">'IP4.DNS'</span></code></pre></figure>

<p>Create a file at <code class="language-plaintext highlighter-rouge">/etc/docker/daemon.json</code> with the following entries:</p>

<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="o">{</span>
	<span class="s2">"dns"</span>: <span class="o">[</span><span class="s2">"my-nameserver-ip-from-above"</span>, <span class="s2">"8.8.8.8"</span><span class="o">]</span>
<span class="o">}</span></code></pre></figure>
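
<p>The file can also be generated programmatically; a small sketch (the nameserver IP below is a placeholder for the one returned by nmcli):</p>

```python
import json


def daemon_config(host_dns: str, fallback: str = "8.8.8.8") -> str:
    """Build the contents of /etc/docker/daemon.json with the host DNS first."""
    return json.dumps({"dns": [host_dns, fallback]}, indent=4)


print(daemon_config("192.168.1.1"))  # placeholder nameserver IP
```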

<p>Restart the docker daemon</p>

<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="nb">sudo </span>systemctl restart docker.service

<span class="nb">sudo </span>systemctl status docker.service</code></pre></figure>

<p>As a test, we can run the following image to see if it can perform an nslookup of google.com from within a container:</p>

<figure class="highlight"><pre><code class="language-shell" data-lang="shell">docker run busybox nslookup google.com</code></pre></figure>

<p>The response should include the DNS server address from above:</p>

<figure class="highlight"><pre><code class="language-shell" data-lang="shell">Server:		X.X.X.X
Address:	X.X.X.X:53

Non-authoritative answer:
Name:	google.com
Address: 172.217.16.238

Non-authoritative answer:
Name:	google.com
Address: 2a00:1450:4009:819::200e</code></pre></figure>

<p>Hope it helps someone!</p>]]></content><author><name>Chee Yeo 2023</name></author><category term="docker" /><category term="networking" /><summary type="html"><![CDATA[When running a docker build after the docker daemon is updated, the build logs keep failing with:]]></summary></entry></feed>