List of all internal functions to be organized
Low Level Storage Commands
==========================
These are the filesystem abstractions needed by lit's local database.
storage.write(path, raw) - Write mutable data by path
storage.put(path, raw) - Write immutable data by path
storage.read(path) -> raw - Read mutable data by path (nil if not found)
storage.delete(path) - Delete an entry (removes empty parent directories)
storage.nodes(path) -> iter - Iterate over node children of path
(empty iter if not found)
storage.leaves(path) -> iter - Iterate over leaf children of path
(empty iter if not found)
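
For illustration, a rough sketch of how these calls fit together, assuming a
storage instance has already been constructed elsewhere; the paths, contents,
and hash below are placeholders, not values taken from lit itself:

  -- Hypothetical usage; storage comes from whatever backend lit sets up.
  local raw  = "example file contents"
  local hash = "0123456789abcdef0123456789abcdef01234567"
  storage.put("objects/" .. hash:sub(1, 2) .. "/" .. hash:sub(3), raw)  -- immutable
  storage.write("refs/tags/creationix/greetings/v0.0.1", hash .. "\n")  -- mutable
  local ref = storage.read("refs/tags/creationix/greetings/v0.0.1")     -- nil if missing
  for name in storage.leaves("refs/tags/creationix/greetings") do
    print("version file:", name)
  end
  storage.delete("refs/tags/creationix/greetings/v0.0.1")  -- prunes empty parents
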
Mid Level Storage Commands
==========================
These commands work at a higher level and consume the low-level storage APIs.
db.has(hash) -> bool - check if db has an object
db.load(hash) -> raw - load raw data, nil if not found
db.loadAny(hash) -> kind, value - pre-decode data, error if not found
db.loadAs(kind, hash) -> value - pre-decode and check type or error
db.save(raw) -> hash - save pre-encoded and framed data
db.saveAs(kind, value) -> hash - encode, frame and save to objects/$ha/$sh
db.hashes() -> iter - Iterate over all hashes
db.match(author, name, version)
-> match, hash - Find the best version matching the query.
db.read(author, name, version) -> hash - Read from refs/tags/$author/$tag/v$version
db.write(author, name, version, hash) - Write to refs/tags/$author/$tag/v$version
db.authors() -> iter - Iterate over refs/tags/*
db.names(author) -> iter - Iterate nodes in refs/tags/$author/**
db.versions(author, name) -> iter - Iterate leaves in refs/tags/$author/$tag/*
db.readKey(author, fingerprint) -> key - Read from keys/$author/$fingerprint
db.putKey(author, fingerprint, key) - Write to keys/$author/$fingerprint
db.revokeKey(author, fingerprint) - Delete keys/$author/$fingerprint
db.fingerprints(author) -> iter - iter of fingerprints
db.getEtag(author) -> etag - Read keys/$author.etag
db.setEtag(author, etag) - Writes keys/$author.etag
db.owners(org) -> iter - Iterates lines of keys/$org.owners
db.isOwner(org, author) -> bool - Check if a user is an org owner
db.addOwner(org, author) - Add a new owner
db.removeOwner(org, author) - Remove an owner
db.import(fs, path) -> kind, hash - Import a file or tree into database
db.export(hash, path) -> kind - Export a hash to a path
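
As a hedged sketch of the save/load round trip and the tag refs, assuming db
was built on top of the storage layer above; the blob contents and version
numbers are made up, and in real use version refs point at tag objects rather
than a bare blob:

  local hash = db.saveAs("blob", "print('hello')\n")   -- encode, frame, store
  assert(db.has(hash))
  local body = db.loadAs("blob", hash)                 -- errors on kind mismatch
  db.write("creationix", "greetings", "0.0.1", hash)   -- refs/tags/creationix/greetings/v0.0.1
  local version, found = db.match("creationix", "greetings", "0.0")  -- best 0.0.x
  for author in db.authors() do
    for name in db.names(author) do
      for v in db.versions(author, name) do
        print(author, name, v)
      end
    end
  end
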
Remote Enhanced DB
==================
When an upstream is configured, the db interface has the following additions.
rdb.load(hash) -> raw - calls fetch when not found and tries a second time.
rdb.match(author, name, version)
-> match, hash - Also checks upstream for a match and uses the higher of the two.
rdb.readRemote(author, name, version)
-> hash - Read hash from remote only.
rdb.fetch(hash) - fetch a hash and all dependents from upstream.
rdb.push(hash) - push a hash and all children to upstream.
rdb.upquery(name, request) - send arbitrary queries to the upstream server.
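
A sketch of how the remote layer changes a lookup, assuming rdb is the same db
object with an upstream configured; the package name and version range are
hypothetical:

  local version, hash = rdb.match("creationix", "git", "2.0")  -- consults upstream too
  if hash then
    rdb.fetch(hash)              -- pull the tag object and everything it references
    local raw = rdb.load(hash)   -- now served locally; load also fetches on a miss
  end
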
Package Metadata Commands
=========================
These commands work with package metadata.
pkg.query(fs, path) -> meta, path - Query an on-disk path for package info.
pkg.queryDb(db, path) -> meta, kind - Query an in-db hash for package info.
pkg.normalize(meta) -> author, tag, version - Extract and normalize pkg info
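
A small sketch of querying a checkout on disk and normalizing the result; the
fs instance and the path are placeholders:

  local meta, path = pkg.query(fs, "modules/greetings")  -- finds the package metadata
  if meta then
    print(pkg.normalize(meta))  -- author, tag, version
  end
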
Core Functions
==============
These are the high-level actions. They consume a database instance.
core.tag(path, name, email, key)
-> author, tag, version, hash - Import a package complete with signed tag.
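
For example, publishing a local tree might look roughly like this; the path and
identity values are placeholders and key handling is glossed over:

  local key = nil  -- stands in for a loaded private key used to sign the tag
  local author, tag, version, hash =
    core.tag("modules/greetings", "Jane Doe", "jane@example.com", key)
  print(author, tag, version, hash)
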
REST API
========
This is a simple REST API for reading the remote database over HTTP.
It uses hypermedia in the JSON responses to make linking between requests simple.
GET / -> api json {
blobs = "/blobs/{hash}"
trees = "/trees/{hash}"
packages = "/packages{/author}{/tag}{/version}"
}
GET /blobs/$HASH -> raw data
GET /trees/$HASH -> tree json {
foo = { mode = 0644, hash = "...", url="/blobs/..." }
bar = { mode = 0755, hash = "...", url="/trees/..." }
...
}
GET /packages -> authors json {
creationix = "/packages/creationix"
...
}
GET /packages/$AUTHOR -> tags json {
git = "/packages/creationix/git"
...
}
GET /packages/$AUTHOR/$TAG -> versions json {
v0.1.2 = "/packages/creationix/git/v0.1.2"
...
}
GET /packages/$AUTHOR/$TAG/$VERSION -> tag json {
hash = "..."
object = "..."
object_url = "/trees/..."
type = "tree"
tag = "v0.2.3"
tagger = {
name = "Tim Caswell",
email = "[email protected]",
date = {
seconds = 1423760148
offset = -0600
}
}
message = "..."
}
GET /packages/$AUTHOR/$TAG/latest -> tag json of the most recent version
GET /packages/$AUTHOR/$TAG/$VERSION.zip -> zip bundle of app and dependencies
GET /packages/$AUTHOR/$TAG/latest.zip -> zip bundle of the most recent version
GET /search/$query -> list of matches
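
To show how the hypermedia links chain together, here is a sketch that walks
from the package listing down to a tag; getJson is a hypothetical helper (an
HTTP GET plus a JSON decode, not part of the API above), and the host and
package names are made up:

  -- Placeholder: swap in a real HTTP client and JSON parser.
  local function getJson(url)
    error("replace with a real HTTP GET + JSON decode for " .. url)
  end

  local base     = "http://lit.example.com"
  local authors  = getJson(base .. "/packages")         -- { creationix = "/packages/creationix", ... }
  local tags     = getJson(base .. authors.creationix)  -- { git = "/packages/creationix/git", ... }
  local versions = getJson(base .. tags.git)            -- { ["v0.1.2"] = "...", ... }
  local tag      = getJson(base .. "/packages/creationix/git/latest")
  print(tag.tag, tag.hash, tag.object_url)
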
Server API Handlers
===================
handlers.read
handlers.match
handlers.wants
handlers.want
handlers.send
handlers.claim
handlers.share
handlers.unclaim