-
Install Vault in a Docker image
To install the `vault` client in a Docker image, we can follow the official Vault install documentation. However, this throws an `operation not permitted` error when trying to run the vault client. This is due to Vault trying to lock memory to prevent sensitive values from being swapped to disk. This is a reported issue and is also mentioned on the Vault Docker Hub page. We can overcome this by running the container with the `--cap-add` flag:

```bash
docker run --cap-add=IPC_LOCK -d --name=dev-vault mycustomimage vault
```
However, this is not required if using Raft integrated storage. To avoid the error altogether, we can instead use a multi-stage build to copy the vault binary into the image rather than installing it via a package manager.
An example Dockerfile:
```dockerfile
FROM hashicorp/vault:1.18 AS vaultsource

FROM ubuntu:22.04 AS base
COPY --from=vaultsource /bin/vault /usr/local/bin/vault
```
Using the above, we can create a custom image with a vault CLI that doesn't throw an `operation not permitted` error.
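As a quick check, we can build and run the image (reusing the `mycustomimage` name from above); `vault version` doesn't need to lock memory, so it should run cleanly:

```bash
docker build -t mycustomimage .
docker run --rm mycustomimage vault version
```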
-
Expired GH client GPG keys
When running `sudo apt update` recently, I received a signature verification error when trying to update the gh client. This is due to the GPG key used to verify the .deb and .rpm repositories expiring on 6 September 2024. This is reported as a GH client install issue.
To resolve it, one can run the provided script, which downloads and reinstalls the new GPG key, fixing the error above:
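A sketch of that fix, assuming the keyring lives at the default path used by the official gh install instructions:

```bash
# re-download the current signing key over the expired one
sudo mkdir -p -m 755 /etc/apt/keyrings
wget -qO- https://cli.github.com/packages/githubcli-archive-keyring.gpg \
  | sudo tee /etc/apt/keyrings/githubcli-archive-keyring.gpg > /dev/null
sudo chmod go+r /etc/apt/keyrings/githubcli-archive-keyring.gpg
sudo apt update
```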
By running the above, I was able to run `apt update` and `apt upgrade` again.
Hope it helps!
-
RVM on Ubuntu 22.04 Jammy Jellyfish
I had to set up RVM recently on my Ubuntu 22.04 desktop. However, the original installation instructions were plagued with issues, namely the openssl version shipped with Ubuntu 22.04, which caused conflicts with the ruby installation.
To fix this issue we need to:
- Install the Ubuntu version of RVM
- Install an older openssl version as a package in rvm
- Reference the above openssl package during the rvm install
Firstly, I had to install the Ubuntu version of RVM, following the original instructions in its README. Ensure that any existing RVM installations are removed first.
After RVM is installed properly, these were the steps I took to install ruby 3.0.0:
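A sketch of those steps — `rvm pkg install` builds a local openssl under RVM's own prefix, and the install flag points the ruby build at it. The `/usr/share/rvm` prefix assumes the Ubuntu package install; a per-user install would live under `~/.rvm` instead:

```bash
# build an openssl that ruby 3.0 can link against, under rvm's own prefix
rvm pkg install openssl

# compile ruby against that openssl rather than the system openssl 3
rvm install 3.0.0 --with-openssl-dir=/usr/share/rvm/usr
```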
Only with the above steps was I able to get RVM to work.
Hope it helps someone!
-
Python 3.12 'pkgutil has no attribute ImpImporter' error
While using a virtual env created with python 3.12, I installed a package which resulted in `pip` throwing `AttributeError: module 'pkgutil' has no attribute 'ImpImporter'`. This occurred after installing `setuptools`, which was a dependency of another package. As a result, I was unable to use `pip` itself to remove setuptools. Rather than re-create the venv, the way to resolve this is to reinstall pip by downloading the install script from https://bootstrap.pypa.io/get-pip.py and running it:
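A minimal sketch, run from inside the affected venv:

```bash
curl -sSLO https://bootstrap.pypa.io/get-pip.py
python get-pip.py
```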
By doing so, I was able to get pip working again without rebuilding the entire venv.
This is highlighted in the python github repo: https://github.com/python/cpython/issues/95299
-
Forward User IP from Cloudfront distribution
In a recent project, I was troubleshooting an issue with a CloudFront distribution not passing the right request headers to the origin.
According to HTTP request headers and CloudFront behavior, for the `Host` header:

> CloudFront sets the value to the domain name of the origin that is associated with the requested object.

And for `X-Forwarded-Proto`:

> CloudFront removes the header.

In other words, by default CloudFront will forward the IP of the distribution to the origin and not the real user's IP, and it will also remove the `X-Forwarded-Proto` header. To resolve the issue, we need to add those two headers to the distribution via a custom policy.
But which policy group do we add them to? The cache policy? The origin request policy?
To provide some context, recent changes to CloudFront encourage the use of policies to modify the behaviour of the cache key, requests, and response headers.
As per the CloudFront policy blog post, cache policies are generally used for caching assets. Origin request policies should be used instead to modify request headers, since they are applied during a cache miss or revalidation. In my use case, I don't want the user's IP to be cached but rather forwarded to the origin, so an origin request policy is more appropriate.
Since the CloudFront distribution was built using Terraform, I was able to create a custom origin request policy that whitelists the `Host` and `CloudFront-Forwarded-Proto` headers and attach it to the distribution:
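A sketch of what that policy could look like in Terraform (the resource and policy names here are mine, not the project's):

```hcl
resource "aws_cloudfront_origin_request_policy" "forward_client_headers" {
  name = "forward-client-headers"

  cookies_config {
    cookie_behavior = "none"
  }

  # whitelist the two headers so CloudFront passes them to the origin
  headers_config {
    header_behavior = "whitelist"
    headers {
      items = ["Host", "CloudFront-Forwarded-Proto"]
    }
  }

  query_strings_config {
    query_string_behavior = "none"
  }
}
```

The policy is then referenced from the distribution's cache behavior via its `origin_request_policy_id` argument.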
In my use case, the client IP then appears as `x_forwarded_for: <my ip>` in the origin's logs.
-
Redact PII from Cloudfront logs
In a recent project, I was asked to investigate how to redact or remove personally identifiable information stored in CloudFront logs via AWS WAF, for audit purposes.
Using an `aws_wafv2_web_acl_logging_configuration` resource, we are able to declare a `redacted_fields` block to identify which part of the request to remove. Within the block, we can only declare an argument of `method`, `query_string`, `single_header`, or `uri_path`. Only the `single_header` argument takes a `name` attribute, which is what I needed in my use case.
By entering each header name in an individual block, I was able to filter them out of the CloudFront logs:
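A sketch of the configuration (resource names and the redacted headers are illustrative; `single_header` names must be lowercase):

```hcl
resource "aws_wafv2_web_acl_logging_configuration" "example" {
  log_destination_configs = [aws_kinesis_firehose_delivery_stream.waf_logs.arn]
  resource_arn            = aws_wafv2_web_acl.example.arn

  # one redacted_fields block per header to remove from the logs
  redacted_fields {
    single_header {
      name = "authorization"
    }
  }

  redacted_fields {
    single_header {
      name = "cookie"
    }
  }
}
```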
To test that it worked, I triggered fake requests to be sent via Kinesis Firehose, which populated the logs. I then accessed the logs in S3 and checked that the redacted headers were marked as REDACTED.
More information can be found in the WAF logging management documentation.
-
Fix docker network build issues
When running a docker build after the docker daemon was updated, the build kept failing with:
```
Could not connect to archive.ubuntu.com:80 (185.125.190.36), connection timed out
Could not connect to archive.ubuntu.com:80 (91.189.91.39), connection timed out
Could not connect to archive.ubuntu.com:80 (185.125.190.39), connection timed out
...
```
It turns out that the docker daemon was unable to use the host's networking to do an `apt-get update` within the ubuntu container during the build process, and as such was unable to call out to the remote host. To fix the issue system-wide, we can create an `/etc/docker/daemon.json` file with the right nameserver entries and restart the docker daemon.
Firstly, run the following to get the host's DNS server IP:
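One way to find it, assuming a NetworkManager-based host (`resolvectl status` is the equivalent on systemd-resolved hosts):

```bash
nmcli dev show | grep 'IP4.DNS'
```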
Next, create a file at `/etc/docker/daemon.json` with the DNS entries, then restart the docker daemon:
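A sketch of the entries, where the first address stands in for the host DNS server found above:

```json
{
  "dns": ["192.168.1.1", "8.8.8.8"]
}
```

Then restart the docker daemon:

```bash
sudo systemctl restart docker
```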
As a test, we can run a throwaway container to see if it can do an nslookup of google.com from within a container:
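For example:

```bash
docker run --rm busybox nslookup google.com
```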
The response should include the DNS server address configured above.
Hope it helps someone!
-
Get list of Availability Zones in a given region
A recent Terraform deployment of a VPC failed with an error of `Resource not available in given availability zone`. I used the `aws-cli` to get the list of AZs for the region in question, which returns each AZ's name and state:
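A sketch of the call (the region name is a placeholder):

```bash
aws ec2 describe-availability-zones \
  --region eu-west-1 \
  --query 'AvailabilityZones[].{Name: ZoneName, State: State}' \
  --output table
```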
That allowed me to spot the missing AZ and switch to another region with the right number of AZs.
-
Setting up custom learning rate schedulers in TF 2.0
In ML training, it is essential to understand and utilize an approach to adjusting the learning rate of a model, as it helps apply regularization and prevent overfitting. Learning rate decay is one example: a technique which dynamically adjusts the learning rate of a model during training, reducing it over epochs or steps.
There are 2 main approaches to using learning rate schedulers in TF 2.0:
- Using the `LearningRateScheduler` callback and applying your own function
- Creating a custom subclass of `tf.keras.optimizers.schedules.LearningRateSchedule`
What is the difference? The main difference is that approach 1 is meant to be passed to the `callbacks` kwarg in the `model.fit` call, whereas the second approach allows you to pass the schedule as an input to the optimizer's `learning_rate` kwarg.
1. Using the LearningRateScheduler callback
The callback class requires a function of the form:
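For example (the decay function itself is an illustrative sketch; only the signature is required by the callback):

```python
import tensorflow as tf

def scheduler(epoch, lr):
    # keep the initial rate for the first 10 epochs, then decay exponentially
    if epoch < 10:
        return lr
    return lr * tf.math.exp(-0.1)
```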
The custom function needs to handle 2 parameters: `epoch` and `lr` (the learning rate). The callback will be invoked at the beginning of every epoch, passing in the current epoch and the optimizer's learning rate. The custom function needs to return the new learning rate value, which the callback uses to update the optimizer's learning rate.
To invoke the example callback above:
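A sketch of wiring it up, assuming `model`, `x_train` and `y_train` already exist:

```python
callback = tf.keras.callbacks.LearningRateScheduler(scheduler, verbose=1)
model.fit(x_train, y_train, epochs=20, callbacks=[callback])
```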
2. Subclass the LearningRateSchedule base class
The `LearningRateSchedule` base class adjusts the learning rate per step / batch of training, rather than over an entire epoch. This is useful if you are training your model in steps rather than epochs, for example in GAN training.
An example of creating a custom LR scheduler class:
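A sketch of such a subclass (the halving schedule here is my own illustrative choice):

```python
class StepDecay(tf.keras.optimizers.schedules.LearningRateSchedule):
    """Halve the learning rate every `decay_steps` optimizer steps."""

    def __init__(self, initial_lr=1e-3, decay_steps=1000):
        self.initial_lr = initial_lr
        self.decay_steps = decay_steps

    def __call__(self, step):
        # `step` is the current optimizer iteration, passed in as a tensor
        exponent = tf.floor(tf.cast(step, tf.float32) / self.decay_steps)
        return self.initial_lr * tf.pow(0.5, exponent)
```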
During training, an instance of the subclass is passed directly into the `learning_rate` kwarg of an optimizer object:
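Continuing the sketch above:

```python
optimizer = tf.keras.optimizers.Adam(learning_rate=StepDecay(initial_lr=1e-3))
```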
-
Using encrypted credentials in Rails 5.2
From Rails 5.2 onwards, there is no longer a `config/secrets.yml` file created when a rails app is generated. The default mechanism is to use `credentials` to unify the management and storage of confidential information.
Within a new rails 5.2 app, you will see a `config/credentials.yml.enc` file which is encrypted by default using `config/master.key`. The `master.key` file is the master key used to encrypt/decrypt data stored within the `credentials.yml.enc` file, and as such it is added to `.gitignore` by default.
To view the contents of the encrypted file, you need to run the following:
```bash
EDITOR="vim" bin/rails credentials:show
```
This will display what would previously have been the contents of a file such as `config/secrets.yml`. Mine contains the following out of the box:
```yaml
# aws:
#   access_key_id: 123
#   secret_access_key: 345

# Used as the base secret for all MessageVerifiers in Rails, including the one protecting cookies.
secret_key_base: e86bd7e58727da9b818f0f5a8851e8e2c99679bb9ab0728e6d87fbf31febc26ff8b649dda74e8b5632d16521afb30066254a2e4d6869e2fb57cb93f072b3e0ef
```
To edit/add new entries to the file:
```bash
EDITOR="vim" bin/rails credentials:edit
```
This will allow you to edit/update the entries within `config/credentials.yml.enc`.
You can still use the old YAML syntax to declare variables. For example:
```bash
EDITOR="vim" bin/rails credentials:edit
```

```yaml
# Add the following snippet below
foo:
  bar: baz
```
To access any of the data at runtime, we can use `Rails.application.credentials`, which returns an `ActiveSupport::EncryptedConfiguration` object.
For example, to access the default secret_key_base:
```ruby
Rails.application.credentials.secret_key_base
```
To access nested values, we can use:
```ruby
Rails.application.credentials.foo[:bar] # => baz
```
-
Fixing dep update reference not a tree error
When running `dep ensure -update <dependency> -v` to update a dependency, one might run into the following error:

```
Unable to update checked out version: fatal: reference is not a tree: ...
```

This is due to the cached version of the dependency in `$GOPATH/pkg/dep/sources/<depname>` being in a `detached HEAD` state. To fix this, cd into the dep cache folder and update it manually:
```bash
cd $GOPATH/pkg/dep/sources/<depname>
git checkout master   # or the branch specified in Gopkg.toml
git pull
```
Run `dep ensure -update <dependency>` again and it should work.
This is an open issue on the golang dep repository.
-
Using journalctl to check hardware / bootup errors
While trying to figure out a hardware issue during startup, I discovered that on systemd systems, the `journald` daemon collects logs from early in the boot process. One can use `journalctl` to view the systemd logs for errors:

```bash
sudo journalctl -b -p err
```
One can then page through the list of errors, if any, and work through resolving them.
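If the machine has rebooted since the problem occurred, `journalctl` can also address earlier boots:

```bash
journalctl --list-boots        # show the boots journald knows about
sudo journalctl -b -1 -p err   # errors from the previous boot
```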
-
yum update Protected multilib versions
During a recent `yum update`, it failed with multiple lines of:

```
Error: Protected multilib versions: iptables-1.4.21-24.el7.i686 != iptables-1.4.21-23.el7.x86_64
( more error lines like above )
...
```
What the above error means is that version "X" of an RPM is installed for architecture `x86_64`, while yum was also instructed to install version "Y" of the same RPM for architecture `i686`.
Rather than resolving each dependency manually, I enabled the Workstation Optional RPMs repo to locate the missing RPMs, and the problem was resolved:
```bash
sudo subscription-manager repos --enable=rhel-7-workstation-optional-rpms
sudo yum clean all
sudo rm -rf /var/cache/yum
sudo yum update
```
-
Using kube-proxy to access deployments
When we create a service on a k8s cluster, it is often initialized with the type `ClusterIP`. We can still access such a service using `kubectl proxy`, which allows one to interact with the API without the need for a Bearer token.
Assuming we have a service called `guestbook`, we can access it as below:
, we can access it as below:kubectl proxy > /dev/null & KC_PROXY_PID = $! SERVICE_PREFIX=http://localhost:8001/api/v1/proxy GUESTBOOK_URL = $SERVICE_PREFIX/namespaces/default/services/guestbook
-
Resolving dep ensure conflicts
Sometimes when collaborating on a golang project, it is possible to get dependency conflicts after running `dep ensure`.
The following is the approach I take to resolve them:
- Run `dep ensure -v` with verbose output to debug the issue
- Delete the repo's `Gopkg.lock`
- Clear out the `$GOPATH/src/pkg` directory
- Re-run `dep ensure -v`
-
Using virtualenv in python
When working with python, it is sometimes important to create isolated environments due to compatibility issues between the libs being used. Some examples that come to mind are certbot's dependency on the pyOpenSSL lib, or setting up a deep learning environment.
To install virtualenv:
```bash
pip install virtualenv
```
To create an isolated environment based on a specific python version:
```bash
virtualenv -p /usr/bin/python2.7 <path to env>
```
Without the -p option, virtualenv defaults to the current python version.
To activate the virtualenv:
```bash
source <path to env>/bin/activate
```
You should see the name of the virtualenv in brackets at the left of the prompt. As an extra step, run `python -V` to check that the version is the one specified above.
To exit the virtualenv and return to the terminal:
```bash
deactivate
```
Also install virtualenvwrapper as it provides some useful utility commands to list and create virtualenvs:
```bash
pip install virtualenvwrapper
```
To list all available virtualenvs, for example:
```bash
lsvirtualenv
```
-
golang pointer receiver error
Assuming we have an interface declaration in Go like so:
```go
type Stringer interface {
	String() string
}
```
We can create a custom struct to implement the interface like so:
```go
type MyStruct struct {
	Value string
}

func (m *MyStruct) String() string {
	return m.Value
}
```
If we try to assign a value of type MyStruct to a Stringer variable, we will receive an error of __MyStruct does not implement Stringer (String method has pointer receiver)__
```go
m := MyStruct{Value: "test"}
var s Stringer
s = m // throws the error above
```
This is because the `String()` method is defined on the pointer type `*MyStruct`, so only `*MyStruct`, and not `MyStruct`, satisfies the interface.
To fix the error we just need to use the pointer type:
```go
m := MyStruct{Value: "test"}
var s Stringer
s = &m // no errors
```
-
Vendoring private github repos using dep
When using `dep` for vendoring dependencies in a go project, I came across an issue pulling down a private github repo: `dep ensure -v` kept reporting an error with the repo.
To overcome this, you can create a `~/.netrc` file with your credentials to access the private repo. For example, when using github, you first need to create a Personal Access Token within your Account Settings. Then create a `~/.netrc` file with the following format:
file with the following format:machine github.com login [GITHUB USERNAME] password [GITHUB TOKEN]
This is also documented in the dep repo.
-
Kubectl and KUBECONFIG
While working on a kubernetes-based project, I had to set the `$KUBECONFIG` env variable in order to access a private cluster.
Later, I started minikube and ran `kubectl config view`. After that, all my kubectl calls to the private cluster failed.
The reason is the way kubectl behaves when it detects the `$KUBECONFIG` env variable. According to the docs, kubectl merges the files listed in `$KUBECONFIG` into one effective configuration. Since my `$KUBECONFIG` was still present when I started minikube, minikube merged its settings into the file pointed to by `$KUBECONFIG` and set minikube as the current context, which is why all the kubectl calls were going to the minikube cluster only.
As a note to self, I need to remember to unset `$KUBECONFIG` (or switch the current context back) once I'm done with a cluster.
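A sketch of that cleanup (the context name is hypothetical):

```bash
# stop pointing kubectl at the merged config
unset KUBECONFIG

# or explicitly switch back to the private cluster's context
kubectl config use-context my-private-cluster
kubectl config current-context   # verify which cluster kubectl talks to
```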
-
Redhat Subscription Renewal
Recently I had to update my Redhat subscription.
Afterwards, the subscription-manager application kept showing as “No valid subscriptions found”. This was caused by a mismatch between the type of RHEL system I was running and the actual subscription type itself.
To ensure that one renews to the right subscription type, simply use the following:
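A sketch of the commands I would reach for here, using standard `subscription-manager` subcommands:

```bash
sudo subscription-manager refresh            # pull the renewed subscription data
sudo subscription-manager list --available   # check which subscriptions match this system type
sudo subscription-manager attach --auto      # attach the best matching subscription
```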
-
Checking for open ports in go lang
I had to create a periodic status check for an open port on a specific host for a service I was working on recently. The status check has to work both for localhost in development and for the remote host.
Using the `net` package in Go, I was able to come up with the following snippet for testing the localhost port:
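A sketch of such a check (the function name and port are mine, not the original snippet's): if we can take the listener ourselves, nothing else is bound to the port.

```go
package main

import (
	"fmt"
	"net"
)

// localPortInUse reports whether something is already listening on the
// given local port by trying to grab the listener ourselves.
func localPortInUse(port string) bool {
	ln, err := net.Listen("tcp", net.JoinHostPort("localhost", port))
	if err != nil {
		return true // listen failed, so the port is already taken
	}
	ln.Close()
	return false
}

func main() {
	fmt.Println(localPortInUse("8080"))
}
```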
We use `net.Listen` above as it works for localhost only.
For testing the remote port, we can use `net.DialTimeout`, as it accepts a custom timeout parameter which we can use to check for timeout errors:
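A sketch of the remote check, again with names of my own choosing (it additionally needs `time` imported):

```go
// remotePortOpen dials the remote host:port with a timeout so the status
// check fails fast on unreachable hosts instead of hanging.
func remotePortOpen(host, port string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, port), timeout)
	if err != nil {
		return false // refused or timed out
	}
	conn.Close()
	return true
}
```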