My list of new technologies to try has been growing fast over the past months, but now I can finally catch up with all the cool improvements that can enhance my (and your) dev experience.
Microsoft has enhanced the Windows Subsystem for Linux this year, and Docker completely changed the way Docker Desktop for Windows integrates with the operating system. The most important change for me is that I can finally run Docker Desktop and VMware Workstation on my machine in parallel. When I was looking for a good use case to try it out, HCL Domino was a logical choice. HCL has even started to publish official Docker images for every release, and some pre-releases are only available as Docker images. I have many test Domino machines running in VMs, but I had no HCL Volt. If you don't know what HCL Volt is - it's a new low-code platform that brings the HCL Form Builder experience, now known as HCL Leap, to HCL Domino, which then serves as the data store and application server. When I saw the demos at Engage last year, I already mentioned that it might be a good idea to throw away Domino Designer and bring this technology to Domino. HCL decided to create a separate offering from it (you need to pay a small extra fee for a Volt license on top of your Notes/Domino licenses), but I think it can work very well for many customers. Time to try how it works ...
If you just want to try the HCL Volt experience, you can do so for free at https://voltsandbox.hcltechsw.com/ , but if you are a partner who wants to dive deeper into the tech, running it locally may be a better option.
Windows Subsystem for Linux and Docker Desktop
I'm not going to cover the installation of WSL or Docker Desktop here, as the internet is full of articles and videos on how to do it. The installation is straightforward and worked with no issues for me. You just need to make sure you are running Windows 10 version 2004 or higher.
My typical issue with any new software is that I like to partition my drive to have a separate system drive, app drive, and data drive. This worked well in the past when I was able to move things to new machines more easily, but these days many apps just toss everything into the user's profile directory, which forces me to fight for free space on my system partition. Docker Desktop is no different.
The problem is that all Docker data is stored in a Hyper-V virtual drive (.vhdx) file, which can grow significantly - all images, volumes, and containers go into this one file. After brief experiments, my data file was already 9 GB (which is actually more than I currently have left on my C: drive). Luckily, it's quite easy to move the file to a different drive.
Docker Desktop creates two WSL distributions: docker-desktop and docker-desktop-data.
The first one is and stays small - roughly 100 MB - but docker-desktop-data had to be moved in my case. To do so:
- Shutdown WSL
wsl --shutdown
- Export docker-desktop-data
wsl --export docker-desktop-data E:\docker-desktop\docker-desktop-data.tar
- Unregister current docker-desktop-data distribution
wsl --unregister docker-desktop-data
- Import docker-desktop-data back
wsl --import docker-desktop-data E:\docker-desktop\data E:\docker-desktop\docker-desktop-data.tar --version 2
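After the import, you can verify that the distribution is registered again. The output of `wsl -l -v` will look roughly like the following (your list will also include any Linux distributions you have installed):

```shell
# List registered WSL distributions with their state and WSL version
wsl -l -v
#   NAME                   STATE           VERSION
#   docker-desktop         Stopped         2
#   docker-desktop-data    Stopped         2
```

Once that looks right, you can start Docker Desktop again and the data will live on the new drive.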
Since HCL Volt runs on top of HCL Domino, it's a good idea to try a pure Domino container first. There are basically two ways to work with Domino containers:
- Official Docker images that can be downloaded from FlexNet
- Community project domino-docker
The first approach is easy to use and well documented by HCL, but quite limited. You will get a Domino server that is waiting for remote configuration, not a server you can start using. Everything works fine in the WSL environment.
The second approach requires a bit more work to set up, but once you familiarize yourself with the architecture, it allows you to create a fully configured server with a single command (I know I'm running this on Windows, so it should be a single click, but we are getting beyond basic end-user skills here, right?). I'll describe the steps I had to take to get a Domino and Volt server running using the domino-docker project. If you want to run Volt using domino-docker, you must build the Domino image first, so let's start with that.
Running HCL Domino/Volt
The domino-docker project requires you to download the installation binaries (not the Docker image) from FlexNet, and then it builds a new image for you. This way it complies with the HCL license, as you must be a valid partner/customer to have access to these files. Then you just need to get your copy/clone of the domino-docker repository from https://github.com/IBM/domino-docker.
Cloning the domino-docker repository
We are running all this from Windows, and this may get us into trouble. The build is done using bash scripts and was primarily designed for Linux systems, so we need to make sure that Git is set up correctly for everything to work. I'd suggest having these settings by default anyway, so I put them into my global Git config.
- Keep the line endings unchanged (the default behavior on Windows is to convert them to CR+LF)
git config --global core.autocrlf false
- Add support for symlinks (there are some links between scripts in the repo)
git config --global core.symlinks true
Now you are ready to clone the repo.
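With those settings in place, the clone itself is just the standard command against the repository mentioned above:

```shell
# Clone the domino-docker project (line endings and symlinks
# are handled by the global Git settings configured above)
git clone https://github.com/IBM/domino-docker.git
```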
Optionally, you can do the checkout directly from the Linux subsystem, but if you want the files accessible on an NTFS drive, you need to make sure it supports metadata handling, e.g. by creating/adjusting /etc/wsl.conf with:
[automount]
options = "metadata"
Without this, the chmod operations during git clone would fail.
Building Domino image
As mentioned above, you must download the HCL Domino for Linux installation files (and the FP installation files too, if you want them) from HCL. Then you put them into the software subfolder. The build scripts are really user-friendly - they even point you to the FlexNet URLs if you don't have the required files.
The build script creates a temporary nginx container to host those files, because you don't want them in the image - you just want the installed product. Here comes another glitch when running on Windows.
If you installed Git with default options, git-bash.exe will be associated with .sh files, so you can run the build script directly - but it's not a WSL-based bash, it's a MINGW32 environment, which behaves differently. You may have other bash implementations too (I have one from Cmder). The script runs, but it fails to map the software folder into the nginx container, because it uses different mount paths. WSL uses the /mnt/<drive>/ syntax.
The best way to get around this limitation is to finally dive fully into WSL and use a proper Linux subsystem. I just went to the Microsoft Store and installed Ubuntu 20.04. While doing that, I also installed the new Windows Terminal, which has nice integration with WSL, so you can easily start new tabs with different shells.
After you install the subsystem, don't forget to enable the Docker Desktop integration in the configuration of Docker Desktop.
Without this step, you'd have to do the integration manually in the same way as was needed during WSL1 times.
Now just navigate to the domino-docker folder (remember the /mnt/<drive> syntax, so in my case /mnt/f/Docker/domino-docker) and run:
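The entry point is the build.sh script in the repository root. At the time of writing, building the plain Domino image looked like this - check the project's README for the exact product names your copy of the repo supports:

```shell
# From the Ubuntu (WSL) shell; the path is my clone location, adjust to yours
cd /mnt/f/Docker/domino-docker

# Build the HCL Domino image from the binaries placed in the software folder
./build.sh domino
```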
After a couple of minutes, you will get a Domino image available in your local Docker environment. It's not inside the subsystem you ran the build script from - it's in the "main" Docker subsystem, because the docker commands from e.g. Ubuntu now talk to the Docker on your host. You can manage your Docker infrastructure from there, from PowerShell, or from cmd.exe - they all point to the same Docker.
Building Volt image
My main goal was to get Volt running, so I directly went ahead and built a Volt image. All you need to do is download the Volt files from FlexNet and run:
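Assuming the same build.sh entry point as for the Domino image (again, verify the product name against the project's README):

```shell
# Build the Volt image on top of the Domino image built earlier
./build.sh volt
```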
Now you have both Volt and Domino images available and you can create containers from them.
The Volt server requires a hostname, so you need to pick one first and point it to your local machine, e.g. by editing your local hosts file.
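For example, with a made-up hostname volt.local (pick whatever suits you), the entry in C:\Windows\System32\drivers\etc\hosts would be:

```
127.0.0.1    volt.local
```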
You also need to tell Docker where you want to place your Domino/Volt data. By default, it goes into a Docker volume, which is stored inside the docker-desktop-data.vhdx file that I moved to a different drive earlier. If you are running the Docker command from a Linux subsystem, you can also try to bind a directory instead of the volume by using the same /mnt/<drive> path syntax. It will be way slower (the documentation says 4 times, but it's probably even worse), but you can then see the files directly on your file system, which may be useful. I used that in this sample.
docker run -it
Make sure you don't forget the ConfigFile=config.json and the 443 port configuration, which is missing from the project documentation.
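Pieced together, the run command looked roughly like the following sketch. The container name comes from my setup, while the hostname, image tag, and data path are placeholders - double-check the environment variables and image name against the domino-docker documentation before using this:

```shell
docker run -it \
    --name volt \
    --hostname volt.local \
    -p 80:80 -p 443:443 -p 1352:1352 \
    -v /mnt/f/Docker/volt-data:/local/notesdata \
    -e "ConfigFile=config.json" \
    hclcom/volt:latest
```

The -v bind mount is the slower variant discussed above; replace it with a named volume if you don't need direct access to the files from Windows.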
If you don't want to edit the environment variables directly, you can supply them using a file, or you can move all your configuration into a Docker Compose file and just run it from there. Daniel recently added some nice new samples to the repo: https://github.com/IBM/domino-docker/blob/develop/examples/docker-compose/volt.yml
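A minimal compose file along those lines might look like this sketch - the image name, hostname, and volume are again placeholders, so use the linked volt.yml as the authoritative version:

```
version: "3"
services:
  volt:
    image: hclcom/volt:latest
    hostname: volt.local
    ports:
      - "80:80"
      - "443:443"
      - "1352:1352"
    environment:
      - ConfigFile=config.json
    volumes:
      - volt_data:/local/notesdata
volumes:
  volt_data:
```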
The image will generate a self-signed certificate and configure a new Volt-enabled Domino server for you.
If everything worked well, you should see a Domino server started in your console
and a Volt UI ready to use (after you accept the self-signed certificate and sign in)
The URL is https://<host>/volt-apps/secure/org/ide/manager.html.
The files are directly available in Windows in this case (but as mentioned above, it slows down the container significantly).
Accessing the Domino console
If you are wondering how to access the Domino console, one way is to use docker exec command:
docker exec -it volt /opt/nashcom/startscript/rc_domino_script monitor
You can start it directly from PowerShell if you want to.
Of course, you can also just download the admin.id and connect to the server from a Notes Client.
With this approach, I can easily create test environments without having to deal with full VMs. There is also way more that the build and startup scripts can do, so I plan to explore them further.
Many thanks to Daniel Nashed and Thomas Hampel, who are maintaining the domino-docker project and also helped me when I struggled with the setup.