
HCL Volt on Synology NAS

After playing a bit with HCL Volt locally on my machine, I decided it was time to make Volt easily accessible from my other devices. Testing a mobile UI in particular works much better when you can try it directly on different devices.
Luckily, I bought a Docker-enabled Synology NAS earlier this year and I'm already running a few services there in a similar way. Now I just needed to make sure I can run HCL Volt there too. I'll continue to use the domino-docker project because it allows me to create a pre-configured Volt environment. The Synology Docker environment is a bit limited and its configuration doesn't expose all the options you can use directly from the Docker command line, but you can usually find a workaround if you need something specific.

My main goals were:

  • share the Docker image in my network
  • have an HCL Volt server with a publicly trusted certificate
  • have it accessible from both inside and outside my local network
  • find a process that allows me to easily test V12 pre-release Docker images

Sharing the Docker image

In my previous post, I showed how to build the HCL Volt Docker image. If you work on multiple machines, you'll have to figure out a way to share the image. One option is to export the image the same way HCL does for the official images. That works well when you want to transfer the image to some unknown infrastructure, but if you plan to do more experiments, you may want to host a private Docker registry.
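
For reference, the export/import route is just standard Docker commands. A minimal sketch (the file name is only an example):

# export the locally built image into a compressed archive
docker save hclcom/volt | gzip > volt.tgz

# ... and load it again on the target machine
docker load -i volt.tgz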

Enabling Docker on Synology just adds Docker host capability; it doesn't add the ability to share images through a registry, so you need to install that separately. When I first installed Docker on Synology, I was a bit confused by the Registry tab in the configuration: it only lets you search the configured external registries.

There are nice tutorials online on how to create a Docker registry on Synology. I've deployed Docker Registry 2.0 (of course using Docker) on my NAS and added the joxit/docker-registry-ui web UI for easier management. I've installed them side by side, so the docker-registry-ui just serves the static content. It didn't work when I blindly went for the latest tag; I needed to use the static tag of the image.
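
For illustration, the plain Docker CLI equivalent of what gets configured through the Synology UI could look roughly like this - the ports and paths are my placeholders, and TLS/authentication are handled by the reverse proxy, so check the joxit documentation for the exact CORS and auth details in your setup:

# Docker Registry 2.0 with deletes enabled and image data stored on the NAS volume
docker run -d --name registry -p 5000:5000 -e REGISTRY_STORAGE_DELETE_ENABLED=true -v /volume1/docker/registry:/var/lib/registry registry:2

# joxit/docker-registry-ui, static tag, pointing at the externally reachable registry URL
docker run -d --name registry-ui -p 8081:80 -e REGISTRY_URL=https://registry.pris.to joxit/docker-registry-ui:static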

Tutorials you can follow:

If you've configured authentication for your registry (which you should), you need to log in first using the docker login command, e.g.:

docker login registry.pris.to

Once you sign in, the credentials are permanently stored in the operating system credentials store (this can be configured in .docker/config.json).
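
For reference, with a credential helper in place the relevant part of .docker/config.json looks roughly like this - the auths entry stays empty and the actual secret lives in the OS store; the credsStore value differs per platform (e.g. desktop, wincred, osxkeychain or pass):

{
  "auths": {
    "registry.pris.to": {}
  },
  "credsStore": "desktop"
}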

Then you just need to tag the image with the registry prefix:

docker tag hclcom/volt registry.pris.to/hclcom/volt

and push it:

docker push registry.pris.to/hclcom/volt

If you have a web UI, you can check the result directly (I already had my image there, so it wasn't modified this time).


A few words of warning here: if you keep experimenting with the image build process and keep pushing updated images to the registry over and over, keep in mind that the images are not overwritten; each push is added as a new version. That makes sense, as the registry doesn't know where an image is used and whether the "old" version isn't still needed, e.g. by some other image. You need to delete the old manifests manually and then run the registry's garbage collector to get rid of the blobs.
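
A sketch of that cleanup, assuming deletes are enabled in the registry (REGISTRY_STORAGE_DELETE_ENABLED=true), the registry container is named registry, and any authentication flags are omitted - the digest is of course a placeholder:

# find the digest of the tag you want to remove
curl -sI -H "Accept: application/vnd.docker.distribution.manifest.v2+json" https://registry.pris.to/v2/hclcom/volt/manifests/latest | grep -i docker-content-digest

# delete that manifest
curl -X DELETE https://registry.pris.to/v2/hclcom/volt/manifests/sha256:<digest>

# run the garbage collector inside the registry container to free the blobs
docker exec registry registry garbage-collect /etc/docker/registry/config.yml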

With your image in the registry, you can use it from different machines and, of course, from the Synology too. You must add the registry URL and mark it as active.

Now you can search for the Volt image and use it.

Configuring HCL Volt container

My desired setup is a bit more complex than the default behavior, which automatically generates a self-signed certificate. I use the Synology built-in reverse proxy and I've configured it to use a wildcard Let's Encrypt certificate. It currently can't be renewed automatically because it needs a DNS record change to verify domain ownership, but it's the single place where I have this certificate, so I can live with doing this manually every 3 months (and if not, I can probably script the DNS record change later). You can find the procedure here: https://vdr.one/how-to-create-a-lets-encrypt-wildcard-certificate-on-a-synology-nas/.

I'm running pure HTTP on the Domino server and the reverse proxy takes care of HTTPS. This means I can't enable the automatic HTTPS redirect that is defined in the standard config.json, so I need to supply my own config.

Preparing the data directory and config directory

Synology Docker doesn't work with native Docker volumes, so you normally just prepare a directory on your Synology volume and bind it to the container. In my case, I also need a special configuration file, so I have 2 directories:
  • /volume1/docker/volt/config
  • /volume1/docker/volt/data
When the data directory is bound to the container, it keeps its Linux permissions. Domino in the container runs as the non-root user notes (uid 1000) and may not be able to create the required subfolders/files if the owner is different. The easiest way around the problem for me was to SSH to my NAS and change the ownership to uid 1000 (see the commands below). It doesn't matter that the uid is not used by the NAS, unless you also want to manage the permissions from the Synology UI.
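
A minimal sketch of what I mean, run over SSH on the NAS (paths as above):

# create the two directories if you haven't done it from the UI yet
sudo mkdir -p /volume1/docker/volt/config /volume1/docker/volt/data

# hand the data directory over to uid 1000 - the notes user inside the container
sudo chown -R 1000 /volume1/docker/volt/data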

Alternatively, you can configure the uid of the notes user in the container using the DominoUserID environment variable.

Adjusting the config.json

I need to disable the HTTPS that is enabled in the provided config.json, but I couldn't simply skip the file, as it contains other options that are required for Volt, e.g. session authentication. I've deleted most of the server document configuration and kept just:

Mapping the volumes

We must tell Docker where our configuration file and data folders are. 

I had to move the config.json outside of the data directory because it kept interfering with the folder permissions - in this case, subfolders were created as root and the notes user could not access them anymore.
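
If you run into the same issue, a quick way to spot it is to list the numeric owners after the first start - anything created as uid 0 (root) instead of 1000 is a problem:

ls -ln /volume1/docker/volt /volume1/docker/volt/data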

Configuring the environment variables

Environment variables are the key to the container configuration. Most of them stayed the same as in the default sample, but there are some significant changes (summarized in the sketch after the list):
  • NoSSL - my custom flag to skip the certificate generation. You can just ignore it when using the standard build.
  • ServerName - must be the same as the name of the container. We can't pass the hostname through any Synology option, but the internal DNS allows the server to find itself.
  • ConfigFile - path to my modified config.json without HTTPS enablement.
  • DOMINO_VOLT_URL - the external base URL for Volt; it must contain /volt-apps. Volt uses it internally to generate links to resources.
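
Written out as an env file, these look roughly like the sketch below. The values are illustrative: the container/server name and the external URL are just my placeholders, and the config path depends on where you mount the config directory; the remaining variables follow the default sample.

# volt.env - the variables discussed above
ServerName=volt
ConfigFile=/local/config/config.json
NoSSL=1
DOMINO_VOLT_URL=https://volt.pris.to/volt-apps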


Configuring the ports and https

Only port 80 is needed if all you want is web UI access.
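
Assuming the container's port 80 is published on some host port (I'll use 8080 as a placeholder), you can sanity-check plain HTTP before wiring up the reverse proxy - replace the IP and port with your NAS address and the published port:

curl -I http://192.168.1.10:8080/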



Then you need to configure the reverse proxy in the Synology Control Panel.

And you just need to make sure the correct certificate is used.


Now you should be ready to run your container.
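
For readers without a Synology, everything above boils down to roughly the following plain Docker command - a sketch only, using the same placeholder paths, ports and URLs as in the previous sections and assuming /local/notesdata is the container's data directory as in the standard domino-docker image; the --stop-timeout also gives Domino more time to shut down than the 10-second default:

docker run -d --name volt \
  -p 8080:80 \
  -v /volume1/docker/volt/data:/local/notesdata \
  -v /volume1/docker/volt/config:/local/config \
  --env-file volt.env \
  --stop-timeout 60 \
  registry.pris.to/hclcom/volt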

Final words

Synology Docker is a nice toy for home/test usage. The UI is limited, so I would not recommend it for production, but it works well for me so far. The described Volt setup is really simple, basically just to show that it can be done. There are still some issues I want to check:
  • properly work with hostnames - with the current setup, where the container name = server name, things would not work nicely once the server is connected to other servers.
  • shutdown timeout - the Synology UI doesn't let me specify the stop timeout, and Domino needs a bit more time than the default 10 seconds.
Many thanks again to Daniel Nashed and Thomas Hampel, who maintain the domino-docker project that can build these nice Volt/Domino Docker images.


