Copy joplin/server/.env-sample to the location of your Docker configuration files and rename it to .env (for example, /home/[user]/docker/.env). Then start the container:

```shell
docker run --env-file .env -p 22300:22300 joplin/server:latest
```
The server will listen on port 22300 on localhost. By default, the server will use SQLite, which allows you to test the app without setting up a database. When running the server for production use, you should connect the container to a database, as described below.
The following tags are available:
- latest: always the most recent released version
- beta: always the most recent released beta version
- Major versions, such as 2, 2-beta
- Specific minor versions, such as 2.1, 2.2, 2.3-beta
- Specific patch versions, such as 2.0.4, 2.2.8-beta

You can set up the container either to use an existing PostgreSQL server, or to connect it to a new database using docker-compose.
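For the docker-compose route, a minimal sketch might look like the following. This is a hypothetical compose file, not the one Joplin ships: the service names, the postgres image tag, and the volume path are assumptions, and Joplin's own docker-compose files should be preferred when available.

```yaml
# Hypothetical sketch pairing Joplin Server with a new PostgreSQL
# database. Service names, image tags and paths are illustrative.
version: '3'
services:
  db:
    image: postgres:15
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=joplin
      - POSTGRES_USER=joplin
      - POSTGRES_DB=joplin
  app:
    image: joplin/server:latest
    depends_on:
      - db
    ports:
      - "22300:22300"
    environment:
      - DB_CLIENT=pg
      - POSTGRES_PASSWORD=joplin
      - POSTGRES_DATABASE=joplin
      - POSTGRES_USER=joplin
      - POSTGRES_PORT=5432
      - POSTGRES_HOST=db
```

Note that the app's POSTGRES_HOST is the db service name, since compose services resolve each other by name on the internal network.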
To use an existing PostgreSQL server, set the following variables in the .env file. Either:
```conf
DB_CLIENT=pg
POSTGRES_PASSWORD=joplin
POSTGRES_DATABASE=joplin
POSTGRES_USER=joplin
POSTGRES_PORT=5432
POSTGRES_HOST=localhost
```
Or:

```conf
DB_CLIENT=pg
POSTGRES_CONNECTION_STRING=postgresql://username:password@your_joplin_postgres_server:5432/joplin
```
Ensure that the provided database and user exist, as Joplin Server will not create them. When running on macOS or Windows through Docker Desktop, a mapping of localhost is made automatically. On Linux, you can add --net=host --add-host=host.docker.internal:127.0.0.1 to the docker run command line to make the mapping happen. Any POSTGRES_HOST other than localhost or 127.0.0.1 should work as expected without further action.
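Putting the Linux case together, the full command might look like this (a sketch that assumes the .env file contains the DB_CLIENT=pg variables shown above, with POSTGRES_HOST=localhost):

```shell
# Sketch for Linux: connect the container to a PostgreSQL server
# listening on the host's localhost. With --net=host the container
# shares the host network, so no -p port mapping is needed.
docker run --env-file .env \
  --net=host \
  --add-host=host.docker.internal:127.0.0.1 \
  joplin/server:latest
```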
This step is optional.
Configuring a reverse proxy is not required for core functionality and is only required if Joplin Server needs to be accessible over the internet. See the following documentation for configuring a reverse proxy with Apache or Nginx.
This step is optional.
By default, the item contents (notes, tags, etc.) are stored in the database and no additional steps are required to get that working.
However, since that content can be quite large, you have the option to store it outside the database by setting the STORAGE_DRIVER environment variable.
This step is optional.
To save item contents (notes, tags, etc.) to the local filesystem instead, use:
```conf
STORAGE_DRIVER=Type=Filesystem; Path=/path/to/dir
```
After this is set, all item contents will be saved under the defined /path/to/dir directory.
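When the server runs in Docker, the target directory should be a bind mount or volume so the data survives container restarts. A hypothetical invocation (the host path and the /data/joplin container path are purely illustrative):

```shell
# Sketch: bind-mount a host directory for item contents.
# Assumes .env contains: STORAGE_DRIVER=Type=Filesystem; Path=/data/joplin
docker run --env-file .env \
  -p 22300:22300 \
  -v /home/[user]/joplin-data:/data/joplin \
  joplin/server:latest
```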
This step is optional.
Migrating storage is a bit more complicated because the old content will have to be migrated to the new storage. This is done by providing a fallback driver, which tells the server where to look if a particular item is not yet available on the new storage.
To migrate from the database to the file system, you would set the environment variables as follows:
```conf
STORAGE_DRIVER=Type=Filesystem; Path=/path/to/dir
STORAGE_DRIVER_FALLBACK=Type=Database; Mode=ReadAndWrite
```
From then on, all new and updated content will be added to the filesystem storage. When reading an item, if the server cannot find it in the filesystem, it will look for it in the database.
Fallback drivers have two write modes:
- In ReadAndClear mode, the fallback driver's content is cleared every time an item is moved to the main driver. Over time the old storage is emptied and all content ends up on the new storage.
- In ReadAndWrite mode, content is written to the fallback driver too. This is purely for safety: it allows deploying the new storage (such as the filesystem or S3) while still keeping the old storage up to date, so if something goes wrong it is possible to go back to the old storage until the new one is working reliably.
It's recommended to start with ReadAndWrite mode.
This simple setup with a main and a fallback driver is sufficient to start using a new storage; however, old content that never gets updated will stay in the database. To migrate this content too, you can use the storage import command. It takes a connection string and moves all items from the old storage to the new one.

For example, to move all content from the database to the filesystem:
```shell
docker exec -it CONTAINER_ID node packages/server/dist/app.js storage import --connection 'Type=Filesystem; Path=/path/to/dir'
```
On the database, you can verify that all content has been migrated by running this query:
```sql
SELECT count(*), content_storage_id FROM items GROUP BY content_storage_id;
```
If everything went well, all items should have a content_storage_id > 1 ("1" being the database).
Besides the database and filesystem, it's also possible to use AWS S3 for storage using the same environment variable:
```conf
STORAGE_DRIVER=Type=S3; Region=YOUR_REGION_CODE; AccessKeyId=YOUR_ACCESS_KEY; SecretAccessKeyId=YOUR_SECRET_ACCESS_KEY; Bucket=YOUR_BUCKET
```
Once Joplin Server is exposed to the internet, open the admin UI. For the following instructions, we'll assume that Joplin Server is running on [***].
If Joplin Server is running locally only, access the Admin Page at http://[hostname]:22300.
By default, Joplin Server is set up with an admin user with email admin@localhost and password admin. For security purposes, the admin user's credentials should be changed. On the Admin Page, log in as the admin user. In the upper right, select the Profile button and update the admin password.
While the admin user can be used for synchronisation, it is recommended to create a separate non-admin user for it. To do so, navigate to the Users page - from there you can create a new user. Once this is done, you can use the email and password you specified to sync this user account with your Joplin clients.
Checking the log can be done the standard Docker way:
```bash
# With Docker:
docker logs --follow CONTAINER

# With docker-compose:
docker-compose --file docker-compose.server.yml logs
```
By default the server supports SQLite for development, so nothing needs to be set up.
To use Postgres, from the monorepo root, run docker-compose --file docker-compose.server-dev.yml up, which will start the PostgreSQL database.
From packages/server, run npm run start-dev.
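Put together, the development workflow described above is:

```shell
# From the monorepo root, start the development PostgreSQL database
# (leave this running in one terminal):
docker-compose --file docker-compose.server-dev.yml up

# In another terminal, start the server in development mode:
cd packages/server
npm run start-dev
```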
View the changelog
See LICENSE.md



