In the meantime you could run the container in the foreground with
the "-x9" full debugging flag and maybe look for log messages hinting
that OpenSSL is failing to initialize itself.
After further consideration ... please do exactly that.
Look for log messages beginning with "crypto:" during the initial startup of the server. There *will* be an error describing what went wrong. My guess is that somehow WebCit is picking up your certificate and Citadel Server isn't ... which is weird considering they're both looking in the same place.
You are right, but the files are there; more on that below.
No, neither one of them is picking up the certs. Ports 80 and 443 are running behind a webserver/proxy. Remember ( --http-port=8080 --https-port=8443 )
So if you go to port 8443 you get this:
Secure Connection Failed
An error occurred during a connection to srv2.tamer.pw:8443. Cannot communicate securely with peer: no common encryption algorithm(s).
Here is what I did, and the result:
# docker run -i --rm --network host --volume=/usr/local/citadel:/citadel-data citadeldotorg/citadel --http-port=8080 --https-port=8443 -x9 >>citadel.out 2>>citadel.err
# cat citadel.err | grep crypto
citserver[7]: crypto: generating RSA key pair
webcit[9]: crypto: [re]installing key "/citadel-data/keys/citadel.key" and certificate "/citadel-data/keys/citadel.cer"
citserver[7]: crypto: generating a self-signed certificate
citserver[7]: crypto: cannot read the private key
citserver[7]: crypto: using certificate chain keys/citadel.cer
citserver[7]: crypto: SSL_CTX_use_certificate_chain_file failed: No such file or directory
citserver[7]: crypto: SSL failed: ../../context.has not been initialized
# ls -alh /usr/local/citadel/keys
total 8.0K
drwx------ 2 root root 4.0K Apr  6 19:35 ./
drwxr-xr-x 6 root root 4.0K Apr 11 19:54 ../
lrwxrwxrwx 1 root root   49 Apr  6 19:35 citadel.cer -> /etc/letsencrypt/live/srv2.tamer.pw/fullchain.pem
lrwxrwxrwx 1 root root   47 Apr  6 19:34 citadel.key -> /etc/letsencrypt/live/srv2.tamer.pw/privkey.pem
# cat /usr/local/citadel/keys/citadel.cer
-----BEGIN CERTIFICATE-----
MIIDqzCCAzCgAwIBAgISBub/j7QwilgbIDgv8Lo8lPfeMAoGCCqGSM49BAMDMDIx
So you are right, but the keys are there.
Furthermore, these are the same keys the web browser is using on port 443, meaning the certs are fine. And they are linked to the correct location/folder.
What am I missing?
Hm, I think I am on to something. I went into the container and did the following:
# ls -al /citadel-data/keys
total 8
drwx------ 2 root root 4096 Apr  6 19:35 .
drwxr-xr-x 6 root root 4096 Apr 11 20:12 ..
lrwxrwxrwx 1 root root   49 Apr  6 19:35 citadel.cer -> /etc/letsencrypt/live/srv2.tamer.pw/fullchain.pem
lrwxrwxrwx 1 root root   47 Apr  6 19:34 citadel.key -> /etc/letsencrypt/live/srv2.tamer.pw/privkey.pem
# cat /citadel-data/keys/citadel.key
cat: /citadel-data/keys/citadel.key: No such file or directory
# cat /etc/letsencrypt/live/srv2.tamer.pw/privkey.pem
cat: /etc/letsencrypt/live/srv2.tamer.pw/privkey.pem: No such file or directory
Citadel can't follow the symlink; the symlink target is out of the container's reach.
But how does that work on your system?
Ok, so let's say you've got the Citadel container running in Docker, using a command like this one:
docker run [other options] --volume=/usr/local/citadel:/citadel-data [other options]
So that means "/usr/local/citadel" on the host is mounted to "/citadel-data" in the container. That means you need to put your private key in /usr/local/citadel/keys/citadel.key, and you need to put your certificate (full chain) in /usr/local/citadel/keys/citadel.cer.
And I'm going to take a guess as to where you might have gone wrong, because I've seen this before: citadel.key and citadel.cer CANNOT BE SYMLINKS to something elsewhere in the host's filesystem, because the container can't follow those symlinks out of the mapped space. So the key and certificate need to be copied there, not linked, if /usr/local/citadel/keys is not their home.
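This can be reproduced without Docker at all; a minimal sketch using hypothetical /tmp paths (everything below is illustrative, not part of the actual Citadel setup). A symlink stores its target as a path string, and a bind mount exposes only the mapped directory, so an absolute link pointing outside it dangles inside the container:

```shell
# Hypothetical demo paths showing the core issue.
host=$(mktemp -d)
mkdir -p "$host/letsencrypt" "$host/citadel/keys"
echo "PRIVATE KEY" > "$host/letsencrypt/privkey.pem"
ln -s "$host/letsencrypt/privkey.pem" "$host/citadel/keys/citadel.key"

# On the host the link resolves fine:
cat "$host/citadel/keys/citadel.key"    # prints: PRIVATE KEY

# A container that mounts only "$host/citadel" has no "$host/letsencrypt",
# so the absolute path stored in the link does not exist there and the
# same read fails with "No such file or directory".

# The fix: replace the link with a real copy inside the mapped tree.
rm "$host/citadel/keys/citadel.key"
cp "$host/letsencrypt/privkey.pem" "$host/citadel/keys/citadel.key"
```

Note the `rm` before the `cp`: copying onto an existing symlink would write through the link to its target rather than replacing the link with a regular file.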
Let me know, how close did I get? :)
citadel can't follow the symlink. symlink is out of the container's reach.
TWO MINDS, ONE GREAT THOUGHT! :)
citadel can't follow the symlink. symlink is out of the container's reach.
But how does that work on your system?
So I removed the symlink and copied (hard linked) the files into the keys folder.
Everything works fine now. How is this working on your end?
Hard linking is not a good idea, though.
Actually, I could write a shell script for the Letsencrypt bot as a "post install, execute" script. That would work.
But let's hear from you. How does that work on your end?
On my system, /usr/local/citadel/keys is the native location of the files, so there is no copying or linking involved. Your setup is a little more complex. We can work through some ideas if you want to. Otherwise the best course of action might be to just have a script that detects when there is a new certificate and copies it over.
Just thought of another idea: if copying the keys doesn't sit well with you, you could also map the directory where your keys live to /citadel-data/keys/ in the container. I think that might work.
Well, I guess then we discovered a problem with the manual at https://citadel.org/sslcertificates.html
The manual symlinks the keys.
ln -sfv /etc/letsencrypt/live/${HOSTNAME}/privkey.pem /usr/local/citadel/keys/citadel.key
ln -sfv /etc/letsencrypt/live/${HOSTNAME}/fullchain.pem /usr/local/citadel/keys/citadel.cer
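For a containerized install, a copy-based variant of those two commands would avoid the dangling-symlink problem (this is a sketch using the manual's paths; `HOSTNAME` is assumed to hold the certificate's domain). `-L` dereferences Let's Encrypt's own `live/` -> `archive/` symlinks, so the destination files contain the actual PEM bytes inside the mapped volume:

```shell
# Copy-based replacement for the manual's symlink commands.
# Guarded so it is a no-op on hosts without a letsencrypt lineage.
if [ -d "/etc/letsencrypt/live/${HOSTNAME}" ]; then
    cp -L "/etc/letsencrypt/live/${HOSTNAME}/privkey.pem"   /usr/local/citadel/keys/citadel.key
    cp -L "/etc/letsencrypt/live/${HOSTNAME}/fullchain.pem" /usr/local/citadel/keys/citadel.cer
fi
```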
Well, that was what I was talking about. You don't have to detect anything; certbot has a "renewal-hooks" dir with 3 subdirs: "deploy", "post", "pre".
So any script you put into the "post" folder will be executed after certbot renews a cert.
That would be the perfect place to add a copy script.
I think you need to fix the manual though.
Because anybody who follows that manual will have the same problem. Most people will not persist in trying to fix it; they will just walk away from the project.
Here is what it looks like on my server:
# pwd
/etc/letsencrypt/renewal-hooks
root@srv2 /e/l/renewal-hooks# ls -alh .
total 20K
drwxr-xr-x 5 root root 4.0K Mar 30 19:57 ./
drwxr-xr-x 7 root root 4.0K Apr  2 15:15 ../
drwxr-xr-x 2 root root 4.0K Mar 30 19:57 deploy/
drwxr-xr-x 2 root root 4.0K Apr  1 17:09 post/
drwxr-xr-x 2 root root 4.0K Apr  1 17:08 pre/
root@srv2 /e/l/renewal-hooks# ls -alh *
deploy:
total 8.0K
drwxr-xr-x 2 root root 4.0K Mar 30 19:57 ./
drwxr-xr-x 5 root root 4.0K Mar 30 19:57 ../

post:
total 12K
drwxr-xr-x 2 root root 4.0K Apr  1 17:09 ./
drwxr-xr-x 5 root root 4.0K Mar 30 19:57 ../
-rwxr-xr-x 1 root root   31 Apr  1 17:09 citadel.sh*

pre:
total 12K
drwxr-xr-x 2 root root 4.0K Apr  1 17:08 ./
drwxr-xr-x 5 root root 4.0K Mar 30 19:57 ../
-rwxr-xr-x 1 root root   30 Apr  1 17:08 citadel.sh*
root@srv2 /e/l/renewal-hooks#
Here is what my hooks look like. I just added the cp hooks; the start/stop ones I had added earlier.
root@srv2 /e/l/renewal-hooks# cat pre/citadel.sh
#!/bin/sh
docker stop citadel
root@srv2 /e/l/renewal-hooks# cat post/citadel.sh
#!/bin/sh
cp /etc/letsencrypt/live/srv2.tamer.pw/fullchain.pem /usr/local/citadel/keys/citadel.cer
cp /etc/letsencrypt/live/srv2.tamer.pw/privkey.pem /usr/local/citadel/keys/citadel.key
wait
docker start citadel
I updated my citadel manuals, in case you need to look something up:
http://blog.tamer.pw/linux/citadel
How did you change the Lobby /dotskip?room=_BASEROOM_ to wiki?page=home?
webcit has a "-g" flag that will enter its value as the first command sent to it. (The container has a similar flag that will pass it along to webcit.)
So you can do something like
webcit [other commands] -g "/dotgoto?room=Welcome to UNCENSORED!"
You can put anything in there you want. I chose to go with the welcome wiki because we can control exactly what it says on the front page.
So I've been trying to make this work with Docker.
This link of mine works. It takes me to the right wiki page. https://mail.hansaray.pw/dotgoto?room=Welcome
However, this does not work. When someone logs in, the first page is still the _BASEROOM_:
docker run -d --restart=unless-stopped --network host \
--volume=/usr/local/citadel:/citadel-data \
--volume=/usr/local/webcit/.well-known:/usr/local/webcit/.well-known \
--volume=/usr/local/webcit/static.local:/usr/local/webcit/static.local \
--name=citadel citadeldotorg/citadel --http-port=8080 --https-port=8443 -g "/dotgoto?room=Welcome"
Subject: Re: Citadel Server sudden stop and restart fails with error DBD2055
Fri Apr 11 2025 13:22:11 UTC from IGnatius T Foobar, Subject: Re: Citadel Server sudden stop and restart fails with error DBD2055
…these in the folder. Can this be related to database corruption or I/O delays?
Yes, once the system has been running for a while it will clean up all log files that have been fully committed to the database. Usually this is rock solid, so I'm wondering if you had a resource problem of some sort (disk full, etc.).
Disk is not a problem, less than 50% used. I restored the DB again from backup. The server was running for a couple of hours after that. Then this morning there was again a sudden stop and it was down. The server was restarted automatically, but the restarts then happened every couple of minutes; the restart count was eventually at 235. Up to restart 230 or so there was only one log.XXX file in the data folder, but then they accumulated rapidly. Below are the last log lines before the first automatic restart with error code 11. The interesting part is the "message not found" just before the crash; can this cause the issue? Or is this some sort of attack? Does any of this make sense?
Apr 13 10:55:14 [hostname] citserver[701535]: context: session 7508 (SMTP-MTA) ended.
Apr 13 10:55:16 [hostname] citserver[701535]: msgbase: message #93631 was not found
Apr 13 10:55:16 [hostname] citserver[701535]: msgbase: message #93631 was not found
Apr 13 10:55:21 [hostname] citserver[701535]: context: session (SMTP-MTA) started from 80.94.95.228 (80.94.95.228) uid=-1
Apr 13 10:55:22 [hostname] citserver[701535]: context: session (SMTP-MTA) started from 167.94.146.61 (167.94.146.61) uid=-1
Apr 13 10:55:22 [hostname] citserver[701535]: crypto: TLS using TLS_CHACHA20_POLY1305_SHA256 on TLSv1.3 (256 of 256 bits)
Apr 13 10:55:22 [hostname] citserver[701535]: crypto: ending TLS on this session
Apr 13 10:55:22 [hostname] citserver[701535]: SMTP: client disconnected: ending session.
Apr 13 10:55:22 [hostname] citserver[701535]: context: session 7510 (SMTP-MTA) ended.
Apr 13 10:55:26 [hostname] systemd[1]: citadel.service: Main process exited, code=killed, status=11/SEGV
Apr 13 10:55:26 [hostname] systemd[1]: citadel.service: Failed with result 'signal'.
Apr 13 10:55:27 [hostname] systemd[1]: citadel.service: Scheduled restart job, restart counter is at 1.
Apr 13 10:55:27 [hostname] systemd[1]: Stopped Citadel Server.
Apr 13 10:55:27 [hostname] systemd[1]: Started Citadel Server.
Apr 13 10:55:27 [hostname] citserver: *** Citadel server engine ***
Apr 13 10:55:27 [hostname] citserver: Version 998 (build 24041) ***
Subject: Re: Citadel Server sudden stop and restart fails with error DBD2055
To add to this, I further analysed the syslog and found that within 8 hours after the first crash there were three restarts after signal 11 (SEGV). After the third restart, the fourth crash was with signal 6 (ABRT), after the DB ran out of lock entries:
Apr 13 18:23:03 [hostname] citserver[711154]: bdb: BDB2055 Lock table is out of available lock entries
Apr 13 18:23:03 [hostname] citserver[711154]: bdb: bdb_store(09): error 12: Cannot allocate memory
Apr 13 18:23:03 [hostname] systemd[1]: citadel.service: Main process exited, code=killed, status=6/ABRT
From there the server restarted and survived, accepting connections, for about another 3 minutes. It then restarted 231 times, generating around 6,500 log files within 19 minutes.
It looks like one of the SEGVs somehow corrupted or blocked the DB; the lock table issue is just the fallout. Is there a log where we can see what causes signal 11 (SEGV)?
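On the question of where the SEGV details end up: if the host runs systemd-coredump (an assumption about this setup, not something stated in the thread), each crash leaves a core dump that can be listed and inspected. These are standard coredumpctl invocations, nothing Citadel-specific:

```shell
coredumpctl list citserver     # show captured crashes for the binary
coredumpctl info citserver     # metadata: signal, timestamp, backtrace
coredumpctl debug citserver    # open the newest core in a debugger (gdb)
```

Without systemd-coredump, the kernel's `core_pattern` sysctl decides where (or whether) a core file is written.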
Subject: Re: Citadel Server sudden stop and restart fails with error DBD2055
I had a similar problem late this afternoon. I had to recover the data files from a backup due to a full disk. I reported a similar problem some time ago.
Well, I guess then we discovered a problem with the manual at
https://citadel.org/sslcertificates.html
Thanks for pointing that out. It has been updated.
I am starting to wonder whether we should just put a copy of certbot inside the container and let it run.
So any script you put in to the "post" folder will be executed after
certbot renews a cert.
That would be the perfect place to add a copy script.
Actually the "deploy" directory is the correct place to put it.
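A sketch of such a deploy hook, relying on the `RENEWED_LINEAGE` environment variable that certbot exports to deploy hooks (it points at the renewed cert's `live/` directory). The destination path and container name follow this thread; `KEYS_DIR` is a hypothetical override added for flexibility:

```shell
#!/bin/sh
# Hypothetical /etc/letsencrypt/renewal-hooks/deploy/citadel.sh sketch.
# certbot runs deploy hooks only after an actual renewal, exporting
# RENEWED_LINEAGE as the renewed certificate's live/ directory.

deploy_citadel_cert() {
    keys_dir="${KEYS_DIR:-/usr/local/citadel/keys}"   # assumed destination
    # -L dereferences letsencrypt's live/ -> archive/ symlinks.
    cp -L "${RENEWED_LINEAGE}/fullchain.pem" "${keys_dir}/citadel.cer"
    cp -L "${RENEWED_LINEAGE}/privkey.pem"   "${keys_dir}/citadel.key"
    # Bounce the container so citserver/webcit reload the new key pair.
    if command -v docker >/dev/null 2>&1; then
        docker restart citadel || true
    fi
}

# Only act when certbot actually set the variable.
if [ -n "${RENEWED_LINEAGE:-}" ]; then
    deploy_citadel_cert
fi
```

Compared with a "post" hook, this fires only when a certificate was really renewed, so there is no unnecessary container restart on dry runs.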
And ... duh ... it looks like I did exactly that when I containerized. I was obviously not paying attention to the documentation at the time.
--name=citadel citadeldotorg/citadel --http-port=8080
--https-port=8443 -g "/dotgoto?room=Welcome"
It looks correct ... the only thing I can think of is maybe the question mark or exclamation point is getting mangled by the shell, since you're using double quotes instead of single quotes?
For reference, here is my launch command:
docker run \
-d \
--restart=unless-stopped \
--network host \
--volume=/usr/local/citadel:/citadel-data \
--volume=/usr/local/webcit/.well-known:/usr/local/webcit/.well-known \
--name=citadel \
citadeldotorg/citadel -g '/dotgoto?room=Welcome to UNCENSORED!'
I've got a question mark, spaces, and an exclamation point in there.
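A quick sanity check of the quoting, using the exact string from the command above: single quotes hand `?`, spaces, and `!` to the program literally, whereas in an interactive bash session a `!` inside double quotes can trigger history expansion (non-interactive shells leave it alone, and `?` only globs if the pattern matches an existing filename).

```shell
# Print exactly what would reach webcit's -g flag when single-quoted:
printf '%s\n' '/dotgoto?room=Welcome to UNCENSORED!'
# -> /dotgoto?room=Welcome to UNCENSORED!
```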
Subject: Re: Citadel Server sudden stop and restart fails with error DBD2055
I had a similar problem this late afternoon. I had to recover the data
files from a backup due a full disk. I reported a similar problem some
time ago.
You know what, I'm going to add something to the server to make it refuse to accept new messages when the free disk space is under 100 MB. It sure seems better than seeing people have problems *after* they run out of disk.
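The idea can be sketched as a portable `df` check (a hedged illustration only, not actual citserver code; a real server would do the equivalent of `statvfs()` internally, and `DATA_DIR` here is an assumed stand-in for the data volume):

```shell
# Refuse-writes threshold sketch: 100 MB expressed in 1 KiB df blocks.
DATA_DIR="${DATA_DIR:-/}"                 # stand-in for the Citadel data volume
free_kb=$(df -Pk "$DATA_DIR" | awk 'NR==2 {print $4}')
if [ "$free_kb" -lt 102400 ]; then        # 102400 KiB = 100 MB
    echo "low disk space (${free_kb} KiB free): refusing new messages" >&2
fi
```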
Subject: Re: Citadel Server sudden stop and restart fails with error DBD2055
Mon Apr 14 2025 01:39:12 UTC from IGnatius T Foobar, Subject: Re: Citadel Server sudden stop and restart fails with error DBD2055
I had a similar problem late this afternoon. I had to recover the data files from a backup due to a full disk. I reported a similar problem some time ago.
You know what, I'm going to add something to the server to make it refuse to accept new messages when the free disk space is under 100 MB. It sure seems better than seeing people have problems *after* they run out of disk.
Dear All,
What I think will help a lot, from my experience dealing with this, is the following: run auto-purging every 15 minutes instead of only once a day (the "Hour to run database auto-purge" setting).
Thanks,
Luís Gonçalves