This is a backend API for managing bookmarks with labels and thumbnail support.
- User authentication (login/signup)
- Create, read, update, delete bookmarks
- Add/remove labels to/from bookmarks
- Create, read, update, delete lists
- Add/remove bookmarks to/from lists
- Automatic thumbnail extraction from bookmarked webpages
- Import and export functionality for bookmarks, labels, and lists
The system automatically extracts thumbnail images for bookmarks:
- When a user creates a bookmark without providing a thumbnail URL, the system adds the bookmark to the database with a `null` thumbnail.
- Simultaneously, an asynchronous job is queued to fetch the webpage content and extract a suitable thumbnail URL.
- The job looks for common meta tags that contain image URLs, such as:
  - Open Graph (`og:image`)
  - Twitter Card (`twitter:image`)
  - Link with `rel="image_src"`
  - Article image (`article:image`)
- If a suitable image URL is found, the bookmark record is updated in the database.
- The next time the client fetches the bookmark, the thumbnail URL will be included in the response.
This approach ensures that:
- Bookmark creation is fast and not blocked by the thumbnail extraction process
- Thumbnails are extracted in the background without affecting the user experience
- If thumbnail extraction fails, the bookmark still exists with a `null` thumbnail value
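The lookup step above can be sketched as a small matcher over the fetched HTML. This is an illustrative sketch, not the project's actual job code: the tag names are the ones listed above, while the function and constant names are invented here, and the patterns assume the common attribute order (property/name before content).

```typescript
// Illustrative sketch of the meta-tag lookup described above; not the
// project's actual job code. Patterns assume the common attribute order
// (property/name first, then content/href).
const THUMBNAIL_PATTERNS: RegExp[] = [
  /<meta[^>]+property=["']og:image["'][^>]+content=["']([^"']+)["']/i, // Open Graph
  /<meta[^>]+name=["']twitter:image["'][^>]+content=["']([^"']+)["']/i, // Twitter Card
  /<link[^>]+rel=["']image_src["'][^>]+href=["']([^"']+)["']/i, // rel="image_src"
  /<meta[^>]+property=["']article:image["'][^>]+content=["']([^"']+)["']/i, // Article image
];

// Returns the first matching image URL, or null -- mirroring the
// "bookmark keeps a null thumbnail on failure" behaviour described above.
function extractThumbnailUrl(html: string): string | null {
  for (const pattern of THUMBNAIL_PATTERNS) {
    const match = html.match(pattern);
    if (match) return match[1];
  }
  return null;
}
```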
- Node.js
- PostgreSQL
- Redis (for job queue)
Create a .env file with the following variables:
NODE_ENV=development
DB_HOST=localhost
DB_NAME=bookmarks
DB_USER=postgres
DB_PASSWORD=yourpassword
SERVER_PORT=3000
SESSION_SECRET=yoursecret
REDIS_HOST=localhost
REDIS_PORT=6379
- Install dependencies: `npm install`
- Run migrations: `npm run migrate`
- Start the server: `npm start`
To run with Docker:
docker-compose up
I am currently running Nextcloud self-hosted, but I am in the process of replacing it with TrueNAS Core.
However, I've grown accustomed to using a couple of apps within Nextcloud, one of them being Nextcloud Bookmarks.
After looking around and not being convinced by any of the existing OSS solutions, I decided the scope was small enough to warrant writing my own little server that I could self-host too.
To run the application in development mode (includes Adminer interface):
# Start all services including Adminer
docker-compose --profile dev up

To run the application in production mode (without Adminer):
# Start only the necessary services
docker-compose --profile prod up

The services will start in the following order:
- Database (PostgreSQL)
- Migrations (runs database schema setup)
- Backend server
- Adminer (development mode only)
You can access:
- Adminer UI via http://localhost:8080 (development only)
- Backend API via http://localhost:${SERVER_PORT}
- Database on the default PostgreSQL port (5432)
If you need to run migrations manually:
# Development
docker-compose --profile dev run migrations
# Production
docker-compose --profile prod run migrations

The services can be configured using environment variables:
- DB_USER: Database user
- DB_PASSWORD: Database password
- DB_NAME: Database name
- SERVER_PORT: Backend server port
- NODE_ENV: Environment (development/production)
You can set these variables in your shell before running docker-compose, or create a .env file in the project root. For example:
export DB_USER=myuser
export DB_PASSWORD=mypassword
export DB_NAME=mydb
export SERVER_PORT=3001
export NODE_ENV=production

These same values will need to be set in your application's .env file for the server to connect to the database.
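As an illustration of how server-side code can consume these variables, here is a minimal config-loading sketch. The `loadConfig` function and its fallback defaults are assumptions for illustration, not the project's actual config module; only the variable names come from this README.

```typescript
// Illustrative config loader for the variables documented above.
// Variable names match this README; the defaults are assumptions.
interface AppConfig {
  dbHost: string;
  dbName: string;
  dbUser: string;
  dbPassword: string;
  serverPort: number;
  nodeEnv: "development" | "production";
}

function loadConfig(env: Record<string, string | undefined> = process.env): AppConfig {
  return {
    dbHost: env.DB_HOST ?? "localhost",
    dbName: env.DB_NAME ?? "bookmarks",
    dbUser: env.DB_USER ?? "postgres",
    dbPassword: env.DB_PASSWORD ?? "",
    serverPort: Number(env.SERVER_PORT ?? 3000),
    nodeEnv: env.NODE_ENV === "production" ? "production" : "development",
  };
}
```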
- `npm run migrate` will seed the database and run all existing migrations. You will need to run this command the first time you set up the database.
- `npm start` will give you a nodemon process watching the TS files and running the Node server on the port specified by `SERVER_PORT` in your `.env` file.
- `npm run test` will run the available Jest tests.
The API is documented using the OpenAPI 3.1.1 specification. You can find the complete API documentation in the spec/openapi.yaml file. This specification includes:
- Detailed endpoint descriptions
- Request/response schemas
- Authentication requirements
- Error handling
All authenticated endpoints require a valid session cookie (connect.sid).
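In practice that means a client captures the cookie from the login response and replays it on later requests. Below is a minimal sketch of the cookie handling; the Set-Cookie header format is standard, but the helper name and the commented login flow (endpoint path, request shape) are illustrative assumptions.

```typescript
// Extracts the connect.sid session cookie from a Set-Cookie header so it
// can be sent back on subsequent authenticated requests.
function sessionCookieFrom(setCookieHeader: string): string | null {
  const match = setCookieHeader.match(/connect\.sid=([^;]+)/);
  return match ? `connect.sid=${match[1]}` : null;
}

// Illustrative usage (paths and port follow this README's examples):
// const res = await fetch("http://localhost:3000/login", { method: "POST", ... });
// const cookie = sessionCookieFrom(res.headers.get("set-cookie") ?? "");
// await fetch("http://localhost:3000/bookmarks", { headers: { Cookie: cookie! } });
```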
The API provides endpoints to export and import bookmark data, including bookmarks, labels, and lists.
Endpoint: GET /export
This endpoint allows users to export all their bookmarks, labels, and lists in a single JSON file. The export includes:
- All bookmarks with their associated labels
- All labels
- All lists with their associated bookmarks
The export process preserves all relationships between entities:
- Labels include references to their bookmarks (IDs only)
- Lists include references to their bookmarks (IDs only)
To avoid data duplication, relationships only include the necessary identifier references rather than duplicating the entire entity data.
User IDs (user_id) are automatically removed from the exported data to allow for easy importing by different users.
The response is a JSON file with the appropriate headers for file download.
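The user_id removal can be pictured as a simple projection over each exported record. The helper below is an illustrative sketch, not the actual export code; the record shapes are placeholders.

```typescript
// Drops user_id from each record before export, as described above, so a
// different account can import the file. Record shapes are illustrative.
function stripUserIds<T extends { user_id?: unknown }>(records: T[]): Omit<T, "user_id">[] {
  // Destructure user_id away and keep everything else unchanged.
  return records.map(({ user_id, ...rest }) => rest);
}
```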
Endpoint: POST /import
This endpoint allows users to import bookmarks, labels, and lists from a previously exported JSON file.
Request Body:
{
"version": "1.0",
"exportDate": "2023-06-01T12:00:00.000Z",
"labels": [
{
"id": "original-label-id",
"name": "Work",
"bookmarks": [
{
"id": "original-bookmark-id"
}
]
}
],
"bookmarks": [
{
"id": "original-bookmark-id",
"url": "https://example.com",
"title": "Example Website",
"thumbnail": "https://example.com/image.jpg"
}
],
"lists": [
{
"id": "original-list-id",
"name": "Reading List",
"description": "Articles to read later",
"bookmarks": [
{
"id": "original-bookmark-id"
}
]
}
]
}

The import process:
- Creates labels first
- Creates bookmarks and associates them with labels
- Creates lists and adds bookmarks to them
The import process maintains all relationships between entities:
- Label associations with bookmarks are preserved
- Bookmark associations with lists are preserved
During import, all relationships are reconstructed by mapping the original IDs from the export file to the newly generated IDs in the database.
All imported entities are automatically assigned to the current user, regardless of which user originally exported the data. This allows for sharing bookmark collections between users or restoring data to a new account.
The response includes statistics about the import operation, including counts of successfully created items and any errors encountered.
During import, new IDs are generated for all items, and relationships are maintained based on the original IDs.