- Docker
- docker-compose 1.6+ (for deployment)
- Ruby 1.9.3+ (for Rake)
- Leiningen 2.0
TODO
First, start the database server with `rake dev:db:start` (or, of course, bring your
own PostgreSQL; just ensure the configuration in the Rakefile and `profiles.clj`
is updated accordingly).
Migrations must be run to get the database set
up. First, run the `dev:db:prepare` task to create the required
`schema_migrations` table. To view pending migrations, run `rake dev:db:show_pending`; this prints a list of all database
migrations that need to be run. To apply them, simply run `rake dev:db:run_migrations`.
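Put together, first-time database setup is just those tasks in order (a sketch, assuming the bundled Rakefile):

```shell
# Start the dev database server, create the schema_migrations table,
# list pending migrations, then apply them.
rake dev:db:start
rake dev:db:prepare
rake dev:db:show_pending
rake dev:db:run_migrations
```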
Since I am typically in tmux, the Rake task dev:start is configured to
split the current tmux window and run both the backend and frontend
services. If you are not using tmux for some reason or would like to
start only one of the services, the following commands will do that:
- `lein run` - start the server
- `lein figwheel` - start Figwheel for ClojureScript
A docker-compose.yml file is provided that includes
configuration for both the database and web application containers.
A Rake task for starting the containers locally is not currently implemented.
To start them, all you should need to do is build the JAR with
`lein uberjar` and run `docker-compose up`.
By default, host port 3104 is mapped to container port 3000. This will
probably be made configurable (along with other settings) in the future.
At this point, you should be able to point your browser to http://localhost:3104
and if everything went well, the page should load successfully.
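A quick smoke test from a shell might look like this (assuming the default 3104 mapping and a running container):

```shell
# Expect an HTTP 200 once the containers are up.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3104/
```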
Right now all deployment configuration is hardcoded, but eventually it will
probably be moved to resources/config.yml.
The following are options related to deployment, and are found in `$config[:deploy]`:

- `dbcontainer_name` - the database container name
- `username` - the database user
- `password` - the database password
- `remote_path` - where the project files will be pushed
- `remote_user` - the deploy user
- `ssh_port` - the SSH port of the server
- `host` - the hostname or IP address of the server
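As a rough illustration, the eventual `resources/config.yml` might look something like this (keys mirror the list above; every value here is a made-up example):

```yaml
deploy:
  dbcontainer_name: ppd-db     # database container name (example value)
  username: ppd                # database user (example value)
  password: changeme           # database password (example value)
  remote_path: /srv/ppd        # where project files are pushed (example value)
  remote_user: deploy          # the deploy user (example value)
  ssh_port: 22                 # SSH port of the server (example value)
  host: 192.0.2.10             # hostname or IP of the server (example value)
```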
To deploy, simply run rake prod:deploy. This will do the following:
- Run `lein uberjar` to build the JAR (unless `NO_BUILD` or `NO_SYNC` are set)
- `rsync` necessary files (unless `NO_SYNC` is set): `target/uberjar/*.jar`, `docker-compose.yml`, `Dockerfile`, `Dockerfile-db`, `Rakefile`, `resources/migrations/`, `resources/bin/`
- Bring down current services if they are running
- Build the images
- Start the containers
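For example (assuming `NO_BUILD`/`NO_SYNC` are read from the environment, per the steps above):

```shell
# Full deploy: build, rsync, restart containers.
rake prod:deploy

# Redeploy without rebuilding the JAR.
NO_BUILD=1 rake prod:deploy
```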
After that, the services should be running and accessible internally on the configured listen port (3104 by default, as above). Your web server (Nginx in my case) can then be given a virtual host that proxies requests, e.g.
```nginx
server {
    listen 80;
    server_name ppd.intern.xyzyxyzy.xyz;
    client_max_body_size 20m;

    access_log /var/log/nginx/ppd.intern.xyzyxyzy.xyz.access.log;
    error_log /var/log/nginx/ppd.intern.xyzyxyzy.xyz.error.log;

    location / {
        proxy_pass http://localhost:3104;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
    }

    root /www/intern.xyzyxyzy.xyz/static;
    index index.html;
}
```
The database image is also configured to automatically run all migrations
located in `resources/migrations`.
| Route | Method | Body | URL parameters | Route parameters | Response |
|---|---|---|---|---|---|
| `/links` | GET | Optional query JSON | | | Links matching criteria |
| `/links/:source` | GET | | | `source` | Links from given `:source` |
| `/links/:source` | POST | Links JSON | optional initial `tag` | `source` | Total links imported or error |
| `/links/:id` | GET | | | `id` | Link by `:id` |
| `/links/:id` | DELETE | | | `id` | Delete link `:id` |
| `/links/:id/tags` | GET | | | `id` | Get links tagged with `:tag` |
| `/links/:id/tags/:tag` | POST | | | `id`, `tag` | Tag link |
| `/links/:id/tags/:tag` | DELETE | | | `id`, `tag` | Remove tag from link |
| Route | Method | Body | URL parameters | Route parameters | Response |
|---|---|---|---|---|---|
| `/tags` | GET | | | | Number of links per tag |
Currently, to add links, simply send JSON dumped right from the source
to `/links/:source`. By specifying `:source`, the system will attempt to
transform the input into the general format defined for links. Only reddit
is supported at the moment, but it seems to be working fine.
For example:
```shell
curl \
  -XPOST \
  -H 'Content-type: application/json' \
  -d @resources/liked.json \
  'http://localhost:3104/links/reddit?tag=liked'
```
A Rake task is included to do this; it takes a filename, source, and an optional tag:

```shell
rake "import[resources/liked.json,reddit,liked]"
```
A UNIQUE INDEX has been created for links' external_ids to prevent
duplicates. If any are encountered, updates will simply be merged in.
Links can be queried using JSON or, when going through the frontend, a simple query language that generates it.
The structure is fairly straightforward:
```json
{
  "order": {
    "field": "title",
    "direction": "asc",
    "type": "order"
  },
  "query": {
    "where": [
      {
        "cmp": "like",
        "field": "properties.subreddit",
        "value": "woah",
        "type": "where"
      }
    ]
  }
}
```
To simplify things, a small query language is implemented, e.g.

```
where properties.subreddit ~ woah
order title asc
```
Valid operators for `where` are `=`, `~`, and `/`. The `~` operator
accepts a regex and is case-insensitive. All link fields should be
queryable, including `properties`, whose keys are accessible using `.`
as shown above. `order` takes a field name and an optional direction.
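From a shell, the generated JSON can be built with `jq` and sent with `curl`. This is only a sketch: the field and value are examples, the port assumes the default mapping, and `jq` must be installed.

```shell
# Build the where/order query JSON shown above with jq.
query=$(jq -n \
  --arg field "properties.subreddit" \
  --arg value "woah" \
  '{order: {field: "title", direction: "asc", type: "order"},
    query: {where: [{cmp: "like", field: $field, value: $value, type: "where"}]}}')

# Send it to /links (GET with an optional query JSON body, per the routes table).
curl -s -XGET -H 'Content-type: application/json' -d "$query" http://localhost:3104/links
```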
`tagged` will fetch links with the given tag, e.g.

```
tagged dead
```

becomes

```json
{
  "query": {
    "tagged": [
      "dead"
    ]
  }
}
```

and will return links tagged `dead`.
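The same can be done for tag queries from a shell (default port assumed):

```shell
# Fetch links tagged "dead".
curl -s -XGET \
  -H 'Content-type: application/json' \
  -d '{"query": {"tagged": ["dead"]}}' \
  http://localhost:3104/links
```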
Queries can be sent to `/links` by sending the above JSON. There is also
a Rake task `:search` that accepts a field, a term, and an optional `show`
argument; it performs the required curl command and prints the JSON response.
If `jq` is installed, the output is pretty-printed, and if `show` fields
(delimited by colons) are provided, only those fields are printed.
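A hypothetical invocation (the field, term, and show fields here are all made up; the argument order follows the description above):

```shell
# Search properties.subreddit for "woah", printing only the title and url fields.
rake "search[properties.subreddit,woah,title:url]"
```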