# Introduction

Ugarit is a backup/archival system based around content-addressable storage.

This allows it to upload incremental backups to a remote server or a
local filesystem such as an NFS share or a removable hard disk, yet
have the archive instantly able to produce a full snapshot on demand
rather than needing to download a full snapshot plus all the
incrementals since. The content-addressable storage technique means
that the incrementals can be applied to a snapshot on various kinds of
storage without needing intelligence in the storage itself - so the
snapshots can live within Amazon S3 or on a removable hard disk.

Also, the same storage can be shared between multiple systems that all
back up to it - and the incremental upload algorithm means that
any files shared between the servers will only need to be uploaded
once. If you back up a complete server, then go and back up another
that is running the same distribution, then all the files in `/bin`
and so on that are already in the storage will not need to be backed
up again; the system will automatically spot that they're already
there, and not upload them again.
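
To make that deduplication concrete, here is a minimal sketch of the
content-addressable idea (in Python for illustration, not Ugarit's
actual Chicken Scheme code; the store is just an in-memory dict):

    import hashlib

    class ContentAddressableStore:
        """Toy content-addressable store: blocks are keyed by their hash."""

        def __init__(self):
            self.blocks = {}  # hash key -> block contents

        def put(self, data: bytes) -> str:
            key = hashlib.sha256(data).hexdigest()
            if key not in self.blocks:   # already stored? skip the upload
                self.blocks[key] = data
            return key

    store = ContentAddressableStore()
    k1 = store.put(b"contents of /bin/ls")
    k2 = store.put(b"contents of /bin/ls")      # second server, same file
    assert k1 == k2 and len(store.blocks) == 1  # only one copy is kept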

## So what's that mean in practice?

You can run Ugarit to back up any number of filesystems to a shared
archive, and on every backup, Ugarit will only upload files or parts
of files that aren't already in the archive - be they from the
previous snapshot, earlier snapshots, snapshots of entirely unrelated
filesystems, etc. Every time you do a snapshot, Ugarit builds a
complete directory tree of the snapshot in the archive - but
reusing any parts of files, files, or entire directories that already
exist anywhere in the archive, and only uploading what doesn't already
exist.

The support for parts of files means that, in many cases, gigantic
files like database tables and virtual disks for virtual machines will
not need to be uploaded entirely every time they change, as only the
changed sections will be identified and uploaded.
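
As an illustration, here is a hedged sketch of why that works (plain
Python with fixed-size chunks for simplicity; Ugarit's real chunking
and hash choice may differ):

    import hashlib

    CHUNK_SIZE = 1024 * 1024  # assume 1MiB blocks, as described below

    def chunk_keys(data: bytes):
        """Hash each chunk; unchanged chunks keep the same keys."""
        return [hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
                for i in range(0, len(data), CHUNK_SIZE)]

    old = b"A" * (3 * CHUNK_SIZE)                 # a 3-chunk "virtual disk"
    new = old[:CHUNK_SIZE] + b"B" * CHUNK_SIZE + old[2 * CHUNK_SIZE:]
    changed = set(chunk_keys(new)) - set(chunk_keys(old))
    print(len(changed))  # 1: only the middle chunk needs uploading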

Because a complete directory tree exists in the archive for any
snapshot, the extraction algorithm is incredibly simple - and,
therefore, incredibly reliable and fast. Simple, reliable, and fast
are just what you need when you're trying to reconstruct the
filesystem of a live server.

Also, it means that you can do lots of small snapshots. If you run a
snapshot every hour, then only a megabyte or two might have changed in
your filesystem, so you only upload a megabyte or two - yet you end up
with a complete history of your filesystem at hourly intervals in the
archive.

Conventional backup systems usually store a full backup, then
incrementals, in their archives, meaning that doing a restore involves
reading the full backup then reading every incremental since and
applying them - so to do a restore, you either have to download *every
version* of the filesystem you've ever uploaded, or you have to do
periodic full backups (even though most of your filesystem won't have
changed since the last full backup) to reduce the number of
incrementals required for a restore. Better results are had from
systems that use a special backup server to look after the archive
storage, which accept incremental backups and apply them to the
snapshot they keep in order to maintain a most-recent snapshot that
can be downloaded in a single run; but they then restrict you to using
dedicated servers as your archive stores, ruling out cheaply scalable
solutions like Amazon S3, or just backing up to a removable USB or
eSATA disk you attach to your system whenever you do a backup. And
dedicated backup servers are complex pieces of software; can you rely
on something complex for the fundamental foundation of your data
security system?

## System Requirements

Ugarit should run on any POSIX-compliant system that can run [Chicken
Scheme](http://www.call-with-current-continuation.org/). It stores and
restores all the file attributes reported by the `stat` system call -
POSIX mode permissions, UID, GID, mtime, and optionally atime and
ctime (although the ctime cannot be restored due to POSIX
restrictions). Ugarit will store files, directories, block and
character device special files, symlinks, and FIFOs.

Support for extended filesystem attributes - ACLs, alternative
streams, forks and other metadata - is possible, due to the extensible
directory entry format; support for such metadata will be added as
required.

Currently, only local filesystem-based archive storage backends are
complete: these are suitable for backing up to a removable hard disk
or a filesystem shared via NFS or other protocols. However, the
backend can be accessed via an SSH tunnel, so any remote server that
you can install Ugarit on can be used as a remote archive.

However, the next backends to be implemented will be one for Amazon
S3, and an SFTP backend for storing archives anywhere you can ssh
to. Other backends will be implemented on demand; an archive can, in
principle, be stored on anything that can store files by name, report
on whether a file already exists, and efficiently download a file by
name. This rules out magnetic tapes due to their requirement for
sequential access.

Although we need to trust that a backend won't lose data (for now), we
don't need to trust the backend not to snoop on us, as Ugarit
optionally encrypts everything sent to the archive.

## Terminology

A Ugarit backend is the software module that handles backend
storage. An archive is an actual storage system storing actual data,
accessed through the appropriate backend for that archive. The backend
may run locally under Ugarit itself, or via an SSH tunnel, on a remote
server where it is installed.

For example, if you use the recommended "splitlog" filesystem backend,
your archive might be `/mnt/bigdisk` on the server `prometheus`. The
backend (which is compiled along with the other filesystem backends in
the `backend-fs` binary) must be installed on `prometheus`, and Ugarit
clients all over the place may then use it via ssh to
`prometheus`. However, even with the filesystem backends, the actual
storage might not be on `prometheus` where the backend runs -
`/mnt/bigdisk` might be an NFS mount, or a mount from a storage-area
network. This ability to delegate via SSH is particularly useful with
the "cache" backend, which reduces latency by storing a cache of what
blocks exist in a backend, thereby making it quicker to identify
already-stored files. For instance, a cluster of servers all sharing
the same archive might all use SSH tunnels to access an instance of
the "cache" backend on one of them (using some local disk to store the
cache), which in turn proxies, again via an SSH tunnel, to the actual
archive storage at the other end of a high-latency Internet link.

## What's in an archive?

A Ugarit archive contains a load of blocks, each up to a maximum size
(usually 1MiB, although some backends might impose smaller
limits). Each block is identified by the hash of its contents; this is
how Ugarit avoids ever uploading the same data twice, by checking to
see if the data to be uploaded already exists in the archive by
looking up the hash. The contents of the blocks are compressed and
then encrypted before upload.
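
A hedged sketch of that write path (Python pseudocode rather than
Ugarit's actual Scheme; `archive.exists` and `archive.upload` are
hypothetical stand-ins, and the compression and encryption steps stand
in for whatever algorithms you configure):

    import hashlib, zlib

    def put_block(archive, data: bytes, compress=zlib.compress,
                  encrypt=lambda b: b):  # encryption stubbed out here
        """Hash the plaintext block; if it's new, compress, encrypt
        and upload it. Either way, return its key."""
        key = hashlib.sha256(data).hexdigest()  # hash of the *unencrypted* block
        if not archive.exists(key):
            archive.upload(key, encrypt(compress(data)))
        return key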

Every file uploaded is, unless it's small enough to fit in a single
block, chopped into blocks, and each block uploaded. This way, the
entire contents of your filesystem can be uploaded - or, at least,
only the parts of it that aren't already there! The blocks are then
tied together to create a snapshot by uploading blocks full of the
hashes of the data blocks, and directory blocks are uploaded listing
the names and attributes of files in directories, along with the
hashes of the blocks that contain the files' contents. Even the blocks
that contain lists of hashes of other blocks are subject to checking
for pre-existence in the archive; if only a few MiB of your
hundred-GiB filesystem has changed, then even the index blocks and
directory blocks are re-used from previous snapshots.

Once uploaded, a block in the archive is never again changed. After
all, if its contents changed, its hash would change, so it would no
longer be the same block! However, every block has a reference count,
tracking the number of index blocks that refer to it. This means that
the archive knows which blocks are shared between multiple snapshots
(or shared *within* a snapshot - if a filesystem has more than one
copy of the same file, still only one copy is uploaded), so that if a
given snapshot is deleted, then the blocks that only that snapshot is
using can be deleted to free up space, without corrupting other
snapshots by deleting blocks they share. Keep in mind, however, that
not all storage backends may support this - there are certain
advantages to being an append-only archive. For a start, you can't
delete something by accident! The supplied fs backend supports
deletion, while the splitlog backend does not yet. However, the actual
snapshot deletion command hasn't been implemented yet either, so it's
a moot point for now...

Finally, the archive contains objects called tags. Unlike the blocks,
the tags' contents can change, and they have meaningful names rather
than being identified by hash. Tags identify the top-level blocks of
snapshots within the system, from which (by following the chain of
hashes down through the index blocks) the entire contents of a
snapshot may be found. Unless you happen to have recorded the hash of
a snapshot somewhere, the tags are where you find snapshots when
you want to do a restore!

Whenever a snapshot is taken, as soon as Ugarit has uploaded all the
files, directories, and index blocks required, it looks up the tag you
have identified as the target of the snapshot. If the tag already
exists, then the snapshot it currently points to is recorded in the
new snapshot as the "previous snapshot"; the snapshot header,
containing the previous snapshot's hash along with the date and time
and any comments you provide for the snapshot, is then uploaded (as
another block, identified by its hash). The tag is then updated to
point to the new snapshot.
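
A minimal sketch of that tag update (again illustrative Python, reusing
the hypothetical `put_block` from above; `read_tag` and `write_tag` are
assumed helpers, not Ugarit's actual API):

    import time

    def take_snapshot(archive, tag: str, root_hash: str, comment: str) -> str:
        """Chain a new snapshot header onto the tag, then repoint the tag."""
        previous = archive.read_tag(tag)       # None if the tag is new
        header = {"root": root_hash, "previous": previous,
                  "when": time.time(), "comment": comment}
        header_key = put_block(archive, repr(header).encode())
        archive.write_tag(tag, header_key)     # tags mutate; blocks never do
        return header_key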

This way, each tag actually identifies a chronological chain of
snapshots. Normally, you would use a tag to identify a filesystem
being backed up; you'd keep snapshotting the filesystem to the same
tag, resulting in all the snapshots of that filesystem hanging from
the tag. But if you wanted to remember any particular snapshot
(perhaps if it's the snapshot you take before a big upgrade or other
risky operation), you can duplicate the tag, in effect 'forking' the
chain of snapshots much like a branch in a version control system.

# Using Ugarit

## Installation

Install [Chicken Scheme](http://www.call-with-current-continuation.org/) using their [installation instructions](http://chicken.wiki.br/Getting%20started#Installing%20Chicken).

Ugarit can then be installed by typing (as root):

    chicken-install ugarit

See the [chicken-install manual](http://wiki.call-cc.org/manual/Extensions#chicken-install-reference) for details if you have any trouble, or wish to install into your home directory.

## Setting up an archive

Firstly, you need to know the archive identifier for the place you'll
be storing your archives. This depends on your backend. The archive
identifier is actually the command line used to invoke the backend for
a particular archive; communication with the archive is via standard
input and output, which makes it easy to tunnel via ssh.

### Local filesystem backends

These backends use the local filesystem to store the archives. Of
course, the "local filesystem" on a given server might be an NFS mount
or mounted from a storage-area network.

#### Logfile backend

The logfile backend works much like the original Venti system. It's
append-only - you won't be able to delete old snapshots from a logfile
archive, even when I implement deletion. It stores the archive in two
sets of files; one is a log of data blocks, split at a specified
maximum size, and the other is the metadata: an sqlite database used
to track the location of blocks in the log files, the contents of
tags, and a count of the logs so a filename can be chosen for a new one.

To set up a new logfile archive, just choose where to put the two
parts. It would be nice to put the metadata file on a different
physical disk to the logs directory, to reduce seeking. If you only
have one disk, you can put the metadata file in the log directory
("metadata" is a good name).

You can then refer to it using the following archive identifier:

    "backend-fs splitlog ...log directory... ...metadata file... max-logfile-size"

For most platforms, a max-logfile-size of 900000000 (900 MB) should
suffice. For now, don't go much bigger than that on 32-bit systems
until Chicken's `file-position` function is fixed to work with files
more than 1GB in size.
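
For example, reusing the hypothetical `prometheus` layout from the
Terminology section (the paths are illustrative, not defaults):

    "backend-fs splitlog /mnt/bigdisk/logs /mnt/bigdisk/metadata 900000000"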

#### Filesystem backend

The filesystem backend creates archives by storing each block or tag
in its own file, in a directory. To keep the objects-per-directory
count down, it'll split the files into subdirectories. Because of
this, it uses a stupendous number of inodes (more than the filesystem
being backed up). Only use it if you don't mind that; splitlog is much
more efficient.

To set up a new filesystem-backend archive, just create an empty
directory that Ugarit will have write access to when it runs. It will
probably run as root in order to be able to access the contents of
files that aren't world-readable (although that's up to you), so be
careful of NFS mounts that have `maproot=nobody` set!

You can then refer to it using the following archive identifier:

    "backend-fs fs ...path to directory..."

### Proxying backends

These backends wrap another archive identifier which the actual
storage task is delegated to, but add some value along the way.

### SSH tunnelling

It's easy to access an archive stored on a remote server. The caveat
is that the backend then needs to be installed on the remote server!
Since archives are accessed by running the supplied command, and then
talking to them via stdin and stdout, the archive identifier needs
only be:

    "ssh ...hostname... '...remote archive identifier...'"

### Cache backend

The cache backend is used to cache a list of what blocks exist in the
proxied backend, so that it can answer queries as to the existence of
a block rapidly, even when the proxied backend is on the end of a
high-latency link (eg, the Internet). This should speed up snapshots,
as existing files are identified by asking the backend if the archive
already has them.

The cache backend works by storing the cache in a local sqlite
file. Given a place for it to store that file, usage is simple:

    "backend-cache ...path to cachefile... '...proxied archive identifier...'"

The cache file will be automatically created if it doesn't already
exist, so make sure there's write access to the containing directory.

- WARNING - WARNING - WARNING - WARNING - WARNING - WARNING -

If you use a cache on an archive shared between servers, make sure
that you either:

* Never delete things from the archive

or

* Make sure all access to the archive is via the same cache

If a block is deleted from an archive, and a cache on that archive is
not aware of the deletion (as it did not go "through" the caching
proxy), then the cache will record that the block exists in the
archive when it does not. This will mean that if a snapshot is made
through the cache that would use that block, then it will be assumed
that the block already exists in the archive when it does
not. Therefore, the block will not be uploaded, and a dangling
reference will result!

Some setups which *are* safe:

* A single server using an archive via a cache, not sharing it with
  anyone else.

* A pool of servers using an archive via the same cache.

* A pool of servers using an archive via one or more caches, and
  maybe some not via the cache, where nothing is ever deleted from
  the archive.

* A pool of servers using an archive via one cache, and maybe some
  not via the cache, where deletions are only performed on servers
  using the cache, so the cache is always aware.

## Writing a ugarit.conf

`ugarit.conf` should look something like this:

    (storage <archive identifier>)
    (hash tiger "<salt>")
    [double-check]
    [(compression [deflate|lzma])]
    [(encryption aes <key>)]
    [(file-cache "<path>")]
    [(rule ...)]

The hash line chooses a hash algorithm. Currently Tiger-192 (`tiger`),
SHA-256 (`sha256`), SHA-384 (`sha384`) and SHA-512 (`sha512`) are
supported; if you omit the line then Tiger will still be used, but it
will be a simple hash of the block with the block type appended, which
reveals to attackers what blocks you have (as the hash is of the
unencrypted block, and the hash is not encrypted). This is useful for
development and testing or for use with trusted archives, but not
advised for use with archives that attackers may snoop at. Providing a
salt string produces a hash function that hashes the block, the type
of block, and the salt string, producing hashes that attackers who can
snoop the archive cannot use to find known blocks (see the "Security
model" section below for more details).

I would recommend that you create a salt string from a secure entropy
source, such as:

    dd if=/dev/random bs=1 count=64 | base64 -w 0

Whichever hash function you use, you will need to install the required
Chicken egg with one of the following commands:

    chicken-install -s tiger-hash  # for tiger
    chicken-install -s sha2        # for the SHA hashes

`double-check`, if present, causes Ugarit to perform extra internal
consistency checks during backups, which will detect bugs but may slow
things down.

`lzma` is the recommended compression option for low-bandwidth
backends or when space is tight, but it's very slow to compress;
deflate or no compression at all are better for fast local
archives. To have no compression at all, just remove the `(compression
...)` line entirely. Likewise, to use compression, you need to install
a Chicken egg:

    chicken-install -s z3    # for deflate
    chicken-install -s lzma  # for lzma

Likewise, the `(encryption ...)` line may be omitted to have no
encryption; the only currently supported algorithm is aes (in CBC
mode) with a key given in hex, as a passphrase (hashed to get a key),
or a passphrase read from the terminal on every run. The key may be
16, 24, or 32 bytes for 128-bit, 192-bit or 256-bit AES. To specify a
hex key, just supply it as a string, like so:

    (encryption aes "00112233445566778899AABBCCDDEEFF")

...for 128-bit AES,

    (encryption aes "00112233445566778899AABBCCDDEEFF0011223344556677")

...for 192-bit AES, or

    (encryption aes "00112233445566778899AABBCCDDEEFF00112233445566778899AABBCCDDEEFF")

...for 256-bit AES.

Alternatively, you can provide a passphrase, and specify how large a
key you want it turned into, like so:

    (encryption aes ([16|24|32] "We three kings of Orient are, one in a taxi one in a car, one on a scooter honking his hooter and smoking a fat cigar. Oh, star of wonder, star of light; star with royal dynamite"))

I would recommend that you generate a long passphrase from a secure
entropy source, such as:

    dd if=/dev/random bs=1 count=64 | base64 -w 0

Finally, the extra-paranoid can request that Ugarit prompt for a
passphrase on every run and hash it into a key of the specified
length, like so:

    (encryption aes ([16|24|32] prompt))

(note the lack of quotes around `prompt`, distinguishing it from a passphrase)

Please read the "Security model" section below for details on the
implications of different encryption setups.

Again, as it is an optional feature, to use encryption, you must
install the appropriate Chicken egg:

    chicken-install -s aes

A file cache, if enabled, significantly speeds up subsequent snapshots
of a filesystem tree. The file cache is a file (which Ugarit will
create if it doesn't already exist) mapping filenames to
(mtime,size,hash) tuples; as it scans the filesystem, if it finds a
file in the cache and the mtime and size have not changed, it will
assume it is already archived under the specified hash. This saves it
from having to read the entire file to hash it and then check if the
hash is present in the archive. In other words, if only a few files
have changed since the last snapshot, then snapshotting a directory
tree becomes an O(N) operation, where N is the number of files, rather
than an O(M) operation, where M is the total size of files involved.
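
A hedged sketch of that check (Python; the cache here is a plain dict
standing in for Ugarit's cache file, and `hash_file` is a hypothetical
helper that reads and hashes a whole file):

    import os

    def cached_hash(cache: dict, path: str, hash_file) -> str:
        """Reuse the stored hash if mtime and size are unchanged."""
        st = os.stat(path)
        entry = cache.get(path)
        if entry and entry[0] == st.st_mtime and entry[1] == st.st_size:
            return entry[2]              # cache hit: the file isn't re-read
        digest = hash_file(path)         # cache miss: read and hash the file
        cache[path] = (st.st_mtime, st.st_size, digest)
        return digest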

For example:

    (storage "ssh ugarit@spiderman 'backend-fs splitlog /mnt/ugarit-data /mnt/ugarit-metadata/metadata 900000000'")
    (hash tiger "i3HO7JeLCSa6Wa55uqTRqp4jppUYbXoxme7YpcHPnuoA+11ez9iOIA6B6eBIhZ0MbdLvvFZZWnRgJAzY8K2JBQ")
    (encryption aes (32 "FN9m34J4bbD3vhPqh6+4BjjXDSPYpuyskJX73T1t60PP0rPdC3AxlrjVn4YDyaFSbx5WRAn4JBr7SBn2PLyxJw"))
    (compression lzma)
    (file-cache "/var/ugarit/cache")

Be careful to put a set of parentheses around each configuration
entry. White space isn't significant, so feel free to indent things
and wrap them over lines if you want.

Keep copies of this file safe - you'll need it to do extractions!
Print a copy out and lock it in your fire safe! Ok, currently, you
might be able to recreate it if you remember where you put the
storage, but encryption keys and hash salts are harder to remember...

## Your first backup

Think of a tag to identify the filesystem you're backing up. If it's
`/home` on the server `gandalf`, you might call it `gandalf-home`. If
it's the entire filesystem of the server `bilbo`, you might just call
it `bilbo`.

Then from your shell, run (as root):

    # ugarit snapshot <ugarit.conf> [-c] [-a] <tag> <path to root of filesystem>

For example, if we have a `ugarit.conf` in the current directory:

    # ugarit snapshot ugarit.conf -c localhost-etc /etc

Specify the `-c` flag if you want to store ctimes in the archive;
since it's impossible to restore ctimes when extracting from an
archive, doing this is useful only for informational purposes, so it's
not done by default. Similarly, atimes aren't stored in the archive
unless you specify `-a`, because otherwise, there will be a lot of
directory blocks uploaded on every snapshot, as the atime of every
file will have been changed by the previous snapshot - so with `-a`
specified, on every snapshot, every directory in your filesystem will
be uploaded! Ugarit will happily restore atimes if they are found in
an archive; their storage is made optional simply because uploading
them is costly and rarely useful.

## Exploring the archive

Now you have a backup, you can explore the contents of the
archive. This need not be done as root, as long as you can read
`ugarit.conf`; however, if you want to extract files, run it as root
so the uids and gids can be set.

    $ ugarit explore <ugarit.conf>

This will put you into an interactive shell exploring a virtual
filesystem. The root directory contains an entry for every tag; if you
type `ls` you should see your tag listed, and within that tag, you'll
find a list of snapshots, in descending date order, with a special
entry `current` for the most recent snapshot. Within a snapshot,
you'll find the root directory of your snapshot, and will be able to
`cd` into subdirectories, and so on:

    > ls
    Test <tag>
    > cd Test
    /Test> ls
    2009-01-24 10:28:16 <snapshot>
    2009-01-24 10:28:16 <snapshot>
    current <snapshot>
    /Test> cd current
    /Test/current> ls
    README.txt <file>
    LICENCE.txt <symlink>
    subdir <dir>
    .svn <dir>
    FIFO <fifo>
    chardev <character-device>
    blockdev <block-device>
    /Test/current> ls -ll LICENCE.txt
    lrwxr-xr-x 1000 100 2009-01-15 03:02:49 LICENCE.txt -> subdir/LICENCE.txt
    target: subdir/LICENCE.txt
    ctime: 1231988569.0

As well as exploring around, you can also extract files or directories
(or entire snapshots) by using the `get` command. Ugarit will do its
best to restore the metadata of files, subject to the rights of the
user you run it as.

Type `help` to get help in the interactive shell.

## Duplicating tags

As mentioned above, you can duplicate a tag, creating two tags that
refer to the same snapshot and its history but that can then have
their own subsequent history of snapshots applied to each
independently, with the following command:

    $ ugarit fork <ugarit.conf> <existing tag> <new tag>

## Archive administration

Each backend offers a number of administrative commands for
administering archives. These are accessible via the
`ugarit-archive-admin` command line interface.

To use it, run it with the following command:

    $ ugarit-archive-admin '<archive identifier>'

The available commands differ between backends, but all backends
support the `info` and `help` commands, which give basic information
about the archive, and list all available commands, respectively. Some
offer a `stats` command that examines the archive state to give
interesting statistics, but which may be a time-consuming operation.

### Administering `splitlog` archives

The splitlog backend offers a wide selection of administrative
commands. See the `help` command on a splitlog archive for
details. The following facilities are available:

* Configuring the block size of the archive (this will affect new
  blocks written to the archive, and leave existing blocks untouched,
  even if they are larger than the new block size)

* Configuring the size at which a log file is finished and a new one
  started (likewise, existing log files will be untouched; this will
  only affect new log files)

* Configuring the frequency of automatic synching of the archive
  state to disk. Lowering this harms performance when writing to the
  archive, but decreases the number of in-progress block writes that
  can fail in a crash.

* Enabling or disabling write protection of the archive

* Reindexing the archive, rebuilding the block and tag state from the
  contents of the log. If the metadata file is damaged or lost,
  reindexing can rebuild it (although any configuration changes made
  via other admin commands will need manually repeating as they are
  not logged).

## `.ugarit` files

By default, Ugarit will archive everything it finds in the filesystem
tree you tell it to snapshot. However, this might not always be
desired; so we provide the facility to override this with `.ugarit`
files, or global rules in your `.conf` file.

Note: The syntax of these files is provisional, as I want to
experiment with usability, and the current syntax is ugly. So please
don't be surprised if the format changes in incompatible ways in
subsequent versions!

In quick summary, if you want to ignore all files or directories
matching a glob in the current directory and below, put the following
in a `.ugarit` file in that directory:

    (* (glob "*~") exclude)

You can write quite complex expressions as well as just globs. The
full set of rules is:

* `(glob "`*pattern*`")` matches files and directories whose names
  match the glob pattern

* `(name "`*name*`")` matches files and directories with exactly that
  name (useful for files called `*`...)

* `(modified-within ` *number* ` seconds)` matches files and
  directories modified within the given number of seconds

* `(modified-within ` *number* ` minutes)` matches files and
  directories modified within the given number of minutes

* `(modified-within ` *number* ` hours)` matches files and directories
  modified within the given number of hours

* `(modified-within ` *number* ` days)` matches files and directories
  modified within the given number of days

* `(not ` *rule*`)` matches files and directories that do not match
  the given rule

* `(and ` *rule* *rule...*`)` matches files and directories that match
  all the given rules

* `(or ` *rule* *rule...*`)` matches files and directories that match
  any of the given rules

Also, you can override a previous exclusion with an explicit include
in a lower-level directory:

    (* (glob "*~") include)

You can bind rules to specific directories, rather than to "this
directory and all beneath it", by specifying an absolute or relative
path instead of the `*`:

    ("/etc" (name "passwd") exclude)

If you use a relative path, it's taken relative to the directory of
the `.ugarit` file.

You can also put some rules in your `.conf` file, although relative
paths are illegal there, by adding lines of this form to the file:

    (rule * (glob "*~") exclude)
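
Putting the pieces together, an illustrative `.ugarit` file (an example
combination of the rules above, not a recommended policy) might read:

    (* (glob "*~") exclude)
    ("/etc" (name "passwd") exclude)
    (* (and (glob "*.tmp") (modified-within 1 hours)) exclude)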

# Questions and Answers

## What happens if a snapshot is interrupted?

Nothing! Whatever blocks have been uploaded remain in the archive, but
the snapshot is only added to the tag once the entire filesystem has
been snapshotted. So just start the snapshot again. Any files that
have already been uploaded will then not need to be uploaded again, so
the second snapshot should proceed quickly to the point where it
failed before, and continue from there.

Unless the archive ends up with a partially-uploaded corrupted block
due to being interrupted during upload, you'll be fine. The filesystem
backend has been written to avoid this by writing the block to a file
with the wrong name, then renaming it to the correct name when it's
entirely uploaded.

Actually, there is *one* caveat: blocks that were uploaded, but never
make it into a finished snapshot, will be marked as "referenced" but
there's no snapshot to delete to un-reference them, so they'll never
be removed when you delete snapshots. (Not that snapshot deletion is
implemented yet, mind). If this becomes a problem for people, we could
write a "garbage collect" tool that regenerates the reference counts
in an archive, leading to unused blocks (with a zero refcount) being
unlinked.

## Should I share a single large archive between all my filesystems?

I think so. Using a single large archive means that blocks shared
between servers - eg, software installed from packages and that sort
of thing - will only ever need to be uploaded once, saving storage
space and upload bandwidth. However, do not share an archive between
servers that do not mutually trust each other, as they can all update
the same tags, so can meddle with each other's snapshots - and read
each other's snapshots.

# Security model

I have designed and implemented Ugarit to be able to handle cases
where the actual archive storage is not entirely trusted.

However, security involves tradeoffs, and Ugarit is configurable in
ways that affect its resistance to different kinds of attacks. Here I
will list different kinds of attack and explain how Ugarit can deal
with them, and how you need to configure it to gain that
protection.

## Archive snoopers

This might be somebody who can intercept Ugarit's communication with
the archive at any point, or who can read the archive itself at their
leisure.

Ugarit's splitlog backend creates files with "rw-------" permissions
out of the box to try and prevent this. This is a pain for people who
want to share archives between UIDs, but we can add a configuration
option to override this if that becomes a problem.

### Reading your data

If you enable encryption, then all the blocks sent to the archive are
encrypted using a secret key stored in your Ugarit configuration
file. As long as that configuration file is kept safe, and the AES
algorithm is secure, then attackers who can snoop the archive cannot
decode your data blocks. Enabling compression will also help, as the
blocks are compressed before encrypting, which is thought to make
cryptographic analysis harder.

Recommendations: Use compression and encryption when there is a risk
of archive snooping. Keep your Ugarit configuration file safe using
UNIX file permissions (make it readable only by root), and maybe store
it on a removable device that's only plugged in when
required. Alternatively, use the "prompt" passphrase option, and be
prompted for a passphrase every time you run Ugarit, so it isn't
stored on disk anywhere.

### Looking for known hashes

A block is identified by the hash of its content (before compression
and encryption). If an attacker was trying to find people who own a
particular file (perhaps a piece of subversive literature), they could
search Ugarit archives for its hash.

However, Ugarit has the option to "key" the hash with a "salt" stored
in the Ugarit configuration file. This means that the hashes used are
actually a hash of the block's contents *and* the salt you supply. If
you do this with a random salt that you keep secret, then attackers
can't check your archive for known content just by comparing the hashes.
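
The difference is small but important; a hedged sketch (Python, with
SHA-256 standing in for whichever hash you configured, and ignoring
the block type that Ugarit also mixes in):

    import hashlib

    def plain_key(block: bytes) -> str:
        return hashlib.sha256(block).hexdigest()         # same for everyone

    def salted_key(block: bytes, salt: bytes) -> str:
        return hashlib.sha256(block + salt).hexdigest()  # needs your secret salt

An attacker who knows `plain_key` of a known file can search any
archive for it; without your salt, they cannot predict its `salted_key`.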

Recommendations: Provide a secret string to your hash function in your
Ugarit configuration file. Keep the Ugarit configuration file safe, as
per the advice in the previous point.

## Archive modifiers

These folks can modify Ugarit's writes into the archive, its reads
back from the archive, or can modify the archive itself at their leisure.

Modifying an encrypted block without knowing the encryption key can at
worst be a denial of service, corrupting the block in an unknown
way. An attacker who knows the encryption key could replace a block
with valid-seeming but incorrect content. In the worst case, this
could exploit a bug in the decompression engine, causing a crash or
even an exploit of the Ugarit process itself (thereby gaining the
powers of a process inspector, as documented below). We can but hope
that the decompression engine is robust. Exploits of the decryption
engine, or other parts of Ugarit, are less likely due to the nature of
the operations performed upon them.

However, if a block is modified, then when Ugarit reads it back, the
hash will no longer match the hash Ugarit requested, which will be
detected and an error reported. The hash is checked after
decryption and decompression, so this check does not protect us
against exploits of the decompression engine.
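
A hedged sketch of that read-side check (Python, mirroring the earlier
`put_block` sketch; `archive.download`, `decrypt` and `decompress` are
stand-ins for your configured backend and algorithms):

    import hashlib, zlib

    def get_block(archive, key: str, decompress=zlib.decompress,
                  decrypt=lambda b: b) -> bytes:
        """Fetch a block, undo encryption and compression, then verify
        that the plaintext still hashes to the key we asked for."""
        data = decompress(decrypt(archive.download(key)))
        if hashlib.sha256(data).hexdigest() != key:
            raise IOError("block %s failed its hash check" % key)
        return data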

This protection is only afforded when the hash Ugarit asks for is not
tampered with. Most hashes are obtained from within other blocks,
which are therefore safe unless that block has been tampered with; the
nature of the hash tree conveys the trust in the hashes up to the
root. The root hashes are stored in the archive as "tags", which an
archive modifier could alter at will. Therefore, the tags cannot be
trusted if somebody might modify the archive. This is why Ugarit
prints out the snapshot hash and the root directory hash after
performing a snapshot, so you can record them securely outside of the
archive.

The most likely threat posed by archive modifiers is that they could
simply corrupt or delete all of your archive, without needing to know
any encryption keys.

Recommendations: Secure your archives against modifiers, by whatever
means possible. If archive modifiers are still a potential threat,
write down a log of your root directory hashes from each snapshot, and
keep it safe. When extracting your backups, use the `ls -ll` command
in the interface to check the "contents" hash of your snapshots, and
check they match the root directory hash you expect.

## Process inspectors

These folks can attach debuggers or similar tools to running
processes, such as Ugarit itself.

Ugarit backend processes only see encrypted data, so people who can
attach to that process gain the powers of archive snoopers and
modifiers, and the same conditions apply.

People who can attach to the Ugarit process itself, however, will see
the original unencrypted content of your filesystem, and will have
full access to the encryption keys and hashing keys stored in your
Ugarit configuration. When Ugarit is running with sufficient
permissions to restore backups, they will be able to intercept and
modify the data as it comes out, and probably gain total write access
to your entire filesystem in the process.

Recommendations: Ensure that Ugarit does not run under the same user
ID as untrusted software. In many cases it will need to run as root in
order to gain unfettered access to read the filesystems it is backing
up, or to restore the ownership of files. However, when all the files
it backs up are world-readable, it could run as an untrusted user for
backups, and where file ownership is trivially reconstructible, it can
do restores as a limited user, too.

## Attackers in the source filesystem

These folks create files that Ugarit will back up one day. By having
write access to your filesystem, they already have some level of
power, and standard Unix security practices such as storage quotas
should be used to control them. They may be people with logins on your
box, or more subtly, people who can cause servers to write files;
somebody who sends an email to your mailserver will probably cause
that message to be written to queue files, as will people who can
upload files via any means.

Such attackers might use up your available storage by creating large
files. This creates a problem in the actual filesystem, but that
problem can be fixed by deleting the files. If those files get
archived into Ugarit, then they are a part of that snapshot. If you
are using a backend that supports deletion, then (when I implement
snapshot deletion in the user interface) you could delete that entire
snapshot to recover the wasted space, but that is a rather serious
operation.

More insidiously, such attackers might attempt to abuse a hash
collision in order to fool the archive. If they have a way of creating
a file that, for instance, has the same hash as your shadow password
file, then Ugarit will think that it already has that file when it
attempts to snapshot it, and store a reference to the existing
file. If that snapshot is restored, then they will receive a copy of
your shadow password file. Similarly, if they can predict a future
hash of your shadow password file, and create a shadow password file
of their own (perhaps one giving them a root account with a known
password) with that hash, they can then wait for the real shadow
password file to have that hash. If the system is later restored from
that snapshot, then their chosen content will appear in the shadow
password file. However, doing this requires a very fundamental break
of the hash function being used.

Recommendations: Think carefully about who has write access to your
filesystems, directly or indirectly via a network service that stores
received data to disk. Enforce quotas where appropriate, and consider
not backing up "queue directories" where untrusted content might
appear; migrate incoming content that passes acceptance tests to an
area that is backed up. If necessary, the queue might be backed up to
a non-snapshotting system, such as rsyncing to another server, so that
any excessive files that appear in there are removed from the backup
in due course, while still affording protection.

# Future Directions

Here's a list of planned developments, in approximate priority order:

## General

* More checks with `double-check` mode activated. Perhaps read blocks
  back from the archive to check it matches the blocks sent, to detect
  hash collisions. Maybe have levels of double-check-ness.

* Migrate the source repo to Fossil (when there's a
  kitten-technologies.co.uk migration to Fossil), and update the egg
  locations thingy. Migrate all these Future Directions items to
  actual tickets.

* Profile the system. As of 1.0.1, having done the periodic SQLite
  commits improvement, Ugarit is doing around 250KiB/sec on my home
  fileserver, but using 87% CPU in the ugarit process and 25% in the
  backend-fs process, when dealing with large files (so full 1MiB
  blocks are being processed). This suggests that the main
  block-handling loop in `store-file!` is less than efficient; reading
  via `current-input-port` rather than using the POSIX egg `file-read`
  functions may be a mistake, and there is probably more copying afoot
  than we need.

## Backends

* Carefully document the backend API for other backend authors; in
  particular, note behaviour in crash situations - we assume that
  after a successful flush! all previous blocks are safe, but after a
  flush, if some blocks make it, then all previous blocks must have.
  Eg, writes are done in order and periodically auto-flushed, in
  effect. This invariant is required for the file-cache to be safe
  (see v1.0.2).

* Lock the archive for writing in backend-splitlog, so that two
  snapshots to the same archive don't collide. Do we lock per `put!`
  to allow interleaving, or is that too inefficient? In which case, we
  need to hold a lock that persists for a while, and release it
  periodically to allow other writers to the same archive to have a
  chance.

* Make backend-splitlog write the current log file offset as well as
  number into the metadata on each flush, and on startup, either
  truncate the file to that position (to remove anything written but
  not flushed to the metadata) or scan the log onwards from that point
  to find (complete) blocks that did not get flushed to the metadata.

* Support for unlinking in backend-splitlog, by marking byte ranges as
  unused in the metadata (and by touching the headers in the log so we
  maintain the invariant that the metadata is a reconstructible cache)
  and removing the entries for the unlinked blocks; perhaps provide an
  option to attempt to re-use existing holes to put blocks in for
  online reuse, and provide an offline compaction operation. Keep
  stats in the index of how many byte ranges are unused, and how many
  bytes are unused, in each file, and report them in the info admin
  interface, along with the option to compact any or all files. We'll
  need to store refcounts in the backend metadata (should we log
  reuses, then, so the metadata can always be reconstructed, or just
  set them to NULL on a reconstruct?); when this is enabled on an
  existing archive with no refcounts, default them to NULL, and treat
  a NULL refcount as "infinity".

* For people doing remote backups who want to not hog resources, write
  a proxy backend that throttles bandwidth usage (see the sketch at
  the end of this list). Make it record the time it last sent a
  request to the backend, and the number of bytes read and written;
  then when a new request comes in, delay it until at least the
  largest of (write bandwidth quota * bytes written) and (read
  bandwidth quota * bytes read) seconds has passed since the last
  request was sent. NOTE: Start the clock when SENDING, so the time
  spent handling the request is already counting towards bandwidth
  quotas, or it won't be fair.

* Support for SFTP as a storage backend. Store one file per block, as
  per `backend-fs`, but remotely. See
  http://tools.ietf.org/html/draft-ietf-secsh-filexfer-13 for sftp
  protocol specs; popen an `ssh -s sftp` connection to the server then
  talk that simple binary protocol. Tada! Ideally make an sftp egg,
  then a "ugarit-backend-sftp" egg to keep the dependencies optional.

* Support for S3 as a storage backend. There is now an S3 egg! Make an
  "ugarit-backend-s3" egg to keep the dependencies optional.

* Support for replicated archives. This will involve a special storage
  backend that can wrap any number of other archives, each tagged with
  a trust percentage and read and write load weightings. Each block
  will be uploaded to enough archives to make the total trust be at
  least 100%, by randomly picking the archives weighted by their write
  load weighting. A read-only archive automatically gets its write
  load weighting set to zero, and a warning issued if it was
  configured otherwise. A local cache will be kept of which backends
  carry which blocks, and reads will be serviced by picking the
  archive that carries it and has the highest read load weighting. If
  that archive is unavailable or has lost the block, then they will be
  tried in read load order; and if none of them have it, an exhaustive
  search of all available archives will be performed before giving up,
  and the cache updated with the results if the block is found. In
  order to correctly handle archives that were unavailable during
  this, we might need to log an "unknown" for that block key / archive
  pair, rather than assuming the block is not there, and check it
  later. Users will be given an admin command to notify the backend of
  an archive going missing forever, which will cause it to be removed
  from the cache. Affected blocks should be examined and re-replicated
  if their replication count is now too low. Another command should be
  available to warn of impending deliberate removal, which will again
  remove the archive from the cluster and re-replicate, the difference
  being that the disappearing archive is usable for re-replicating
  FROM, so this is a safe operation for blocks that are only on that
  one archive. The individual physical archives that we put
  replication on top of won't be "valid" archives unless they are 100%
  replicated, as they'll contain references to blocks that are on
  other archives. It might be a good idea to mark them as such with a
  special tag to avoid people trying to restore directly from them;
  the frontend should complain if you attempt to directly use an
  archive with the special tag in place. A copy of the replication
  configuration could be stored under a special tag to mark this fact,
  and to enable easy finding of the proper replicated archive to work
  from. There should be a configurable option to snapshot the cache to
  the archives whenever the replicated archive is closed, too. The
  command line to the backend, "backend-replicated", should point to
  an sqlite file for the configuration and cache, and users should use
  admin commands to add/remove/modify archives in the cluster.

---|
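A minimal sketch of that write-side selection, assuming each archive
is represented as an alist with `trust` (a percentage) and
`write-weight` (a positive integer) entries; all names here are
illustrative:

```scheme
;; Assumes SRFI-1 (for `remove`) and a (random n) procedure
;; returning an integer in [0, n), as Chicken provides.
(use srfi-1)

(define (weight a) (cdr (assq 'write-weight a)))

;; Pick one archive at random, weighted by its write load weighting.
(define (pick-weighted archives)
  (let* ((total (apply + (map weight archives)))
         (r (random total)))
    (let loop ((as archives) (acc 0))
      (let ((acc* (+ acc (weight (car as)))))
        (if (or (null? (cdr as)) (< r acc*))
            (car as)
            (loop (cdr as) acc*))))))

;; Keep picking (without replacement) until accumulated trust >= 100%.
(define (choose-write-targets archives)
  (let loop ((chosen '()) (trust 0) (pool archives))
    (if (or (>= trust 100) (null? pool))
        chosen
        (let ((a (pick-weighted pool)))
          (loop (cons a chosen)
                (+ trust (cdr (assq 'trust a)))
                (remove (lambda (x) (eq? x a)) pool))))))
```
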
## Core

* Add the option to append hash signatures to the post-encryption
  blocks in the archive, to protect against people who tamper with
  blocks in order to try and exploit vulnerabilities in the
  decompression or decryption code (and to more quickly detect
  tampering in the pipeline, reducing the DoS effect of all that
  wasted decryption and decompression, potentially including things
  that decrypt to giant amounts of RAM). A sketch of the idea follows.

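A minimal sketch of sealing and checking a block, assuming a
hypothetical keyed `(mac key data)` procedure returning a fixed-length
string; the point is that the signature is verified *before* anything
reaches the decryption or decompression code:

```scheme
(define mac-length 32) ; e.g. a 256-bit keyed hash; an assumption

(define (seal-block mac-key encrypted-block)
  ;; store the signature after the (already encrypted) payload
  (string-append encrypted-block (mac mac-key encrypted-block)))

(define (open-block mac-key sealed)
  (let* ((split (- (string-length sealed) mac-length))
         (body  (substring sealed 0 split))
         (sig   (substring sealed split (string-length sealed))))
    (if (string=? sig (mac mac-key body))
        body ; only now is it safe to decrypt and decompress
        (error "block signature mismatch - possible tampering"))))
```
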
* More stats. Log bytes written AFTER compression and encryption in
  `archive-put!`. Log snapshot start and end times in the snapshot
  object.

* Clarify which characters are legal in tag names sent to backends,
  and which are legal in human-supplied tag names, and check that
  human-supplied tag names match a regular expression (one possible
  rule is sketched below). Leave space for system-only tag names for
  storing archive metadata; suggest making a hash sign illegal in tag
  names.

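For instance, a validation along these lines, using Chicken's irregex
(the exact character set is an open design decision, not a settled
spec):

```scheme
(use irregex)

;; Letters, digits, and a little safe punctuation; no leading
;; punctuation, and no `#`, which stays reserved for system tags.
(define tag-name-rx "^[A-Za-z0-9][A-Za-z0-9_.-]*$")

(define (valid-human-tag? name)
  (and (irregex-match tag-name-rx name) #t))

;; (valid-human-tag? "nightly-backups")        => #t
;; (valid-human-tag? "#ugarit-archive-format") => #f
```
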
* Clarify which characters are legal in block keys. Ugarit will only
  issue [a-zA-Z0-9] for normal blocks, but may use other characters
  (hash?) for special metadata blocks; establish a contract of what
  backends must support (a-z, A-Z, 0-9, hash?).

* API documentation for the modules we export.

* Encrypt tags, with a hash inside to check they've decrypted
  correctly. Add a special "#ugarit-archive-format" tag that records a
  format version number, to note that this change has been
  applied. Provide an upgrade tool. Don't do auto-upgrades, or
  attackers will be able to drop in plaintext tags.

* Store a test block in the archive that is used to check that the
  same encryption and hash settings are used for an archive,
  consistently (changing the compression setting is supported, but
  changing encryption or hash will lead to confusion). Encrypt the
  hash of the passphrase and store it in the test block, which should
  have a name that cannot clash with any actual hash (e.g., use
  non-hex characters in its name). When the block does not exist,
  create it; when it does exist, check it against the current
  encryption and hashing settings to see if it matches. When creating
  a new block, if the "prompt" passphrase specification mechanism is
  in use, prompt again to confirm the passphrase. If no encryption is
  in use, check that the hash algorithm doesn't change by storing the
  hash of a constant string, unencrypted. To make brute-forcing the
  passphrase or hash salt harder, consider applying the hash a large
  number of times, to increase the compute cost of checking it (see
  the sketch below). Thanks to Andy Bennett for this idea.

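A minimal sketch of that key-stretching trick, assuming a hypothetical
one-way `(hash-string s)` procedure; each extra iteration multiplies
the cost of testing a guessed passphrase:

```scheme
(define (stretch passphrase iterations)
  ;; hash, then re-hash the result, `iterations` times in total
  (let loop ((h (hash-string passphrase)) (i 1))
    (if (>= i iterations)
        h
        (loop (hash-string h) (+ i 1)))))

;; The test block would then hold (encrypt (stretch passphrase 100000))
;; rather than a cheap single hash of the passphrase.
```
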
* More `.ugarit` actions. Right now we just have exclude and include;
  we might specify less-safe operations such as commands to run before
  and after snapshotting certain subtrees, or filters (don't send this
  SVN repository; instead send the output of `svnadmin dump`),
  etc. Running arbitrary commands is a security risk if random users
  write their own `.ugarit` files - so we'd need some trust-based
  mechanism; they'd need to be explicitly enabled in `ugarit.conf`,
  then a `.ugarit` option could disable all unsafe operations in a
  subtree.

* `.ugarit` rules for file sizes. In particular, a rule to exclude
  files above a certain size. Thanks to Andy Bennett for this idea.

* Support for FFS flags, Mac OS X extended filesystem attributes, NTFS
  ACLs/streams, FAT attributes, etc... Ben says to look at Box Backup
  for some code to do that sort of thing.

* Deletion support - letting you remove snapshots. Perhaps you might
  want to remove all snapshots older than a given number of days on a
  given tag. Or just remove X out of Y snapshots older than a given
  number of days on a given tag. We have the core support for this;
  just find a snapshot and `unlink-directory!` its contents, leaving a
  dangling pointer from the snapshot, and write the snapshot handling
  code to expect this. Again, check Box Backup for that.

* Option, when backing up, to not cross mountpoints.

* Option, when backing up, to store the inode number and mountpoint
  path in directory entries, and then, when extracting, to keep a
  dictionary mapping this unique identifier to a pathname, so that if
  a file to be extracted is already in the dictionary and the hash is
  the same, a hardlink can be created (sketched below).

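A minimal sketch of the extraction-side dictionary, using SRFI-69 hash
tables; `create-hardlink`, `extract-file!`, and the other names are
placeholders, and the stored hash guards against an inode number being
reused with different contents:

```scheme
(use srfi-69)

(define seen (make-hash-table equal?))

(define (extract-with-hardlinks! mountpoint inode file-hash
                                 target-path extract-file!)
  (let* ((key  (list mountpoint inode))
         (prev (hash-table-ref/default seen key #f)))
    (if (and prev (string=? (car prev) file-hash))
        ;; same inode, same contents: link instead of re-extracting
        (create-hardlink (cadr prev) target-path)
        (begin
          (extract-file! target-path)
          (hash-table-set! seen key (list file-hash target-path))))))
```
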
* Archival mode as well as snapshot mode. Whereas a snapshot record
  takes a filesystem tree and adds it to a chain of snapshots of the
  same filesystem tree, archival mode takes a filesystem tree and
  inserts it into a search tree anchored on the specified tag,
  indexing it on a list of key+value properties supplied at archival
  time. An archive tag is represented in the virtual filesystem as a
  directory full of archive objects, each identified by its full
  hash; each archive object references the filesystem root as well as
  the key+value properties, and optionally a parent link like a
  snapshot, as an archive can be made that explicitly replaces an
  earlier one and should replace it in the index. There is also a
  virtual directory for each indexed property, which contains a
  directory for each value of the property, full of symlinks to the
  archive objects, and subdirectories that allow multi-property
  searches on other properties. The index itself is stored as a B-tree
  with a reasonably small block size; when it's updated, the modified
  index blocks are replaced, thereby gaining new hashes, so their
  parents need replacing, all the way up the tree until a new root
  block is created. The existing block unlink mechanism in the
  backends will reclaim storage for blocks that are superseded, if the
  backend supports it. When this is done, Ugarit will offer the option
  of snapshotting to a snapshot tag, or archiving to an archive tag,
  or archiving to an archive tag while replacing a specified archive
  object (nominated by path within the tag), which causes it to be
  removed from the index (except from the directory listing all
  archives by hash), while the new archive object is inserted,
  referencing the old one as a parent. One possible shape for an
  archive object is sketched below.

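For illustration only, one plausible shape for such an archive object
(the field names and values are assumptions, not a defined format):

```scheme
(define example-archive-object
  '((root       . "f3a9...")  ; hash of the archived filesystem root
    (parent     . #f)         ; or the hash of the object this replaces
    (properties . ((title  . "Holiday photos")
                   (author . "alaric")
                   (year   . "2009")))))
```
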
* Dump/restore format. On a dump, walk an arbitrary subtree of an
  archive, serialising objects. Do not put any hashes in the dump
  format - dump out entire files, and just identify objects with
  sequential numbers when forming the directory / snapshot trees. On a
  restore, read the same format and slide it into an archive (creating
  any required top-level snapshot objects if the dump doesn't start
  from a snapshot), putting it onto a specified tag. The intention is
  that this format can be used to migrate your stuff between archives,
  perhaps to change to a better backend.

* Optional progress reporting callback from within `store-file!` and
  `store-directory!`, called on each block within a file or on each
  filesystem object, respectively.

* Add a procedure to resolve a path within the archive node tree from
  any root node. Pass in the path as a list of strings, with the
  symbols `.` and `..` being usable as meta-characters to do nothing
  or to go up a level. Write a utility procedure to parse a string
  into such a form (sketched below). Make it recognise and follow
  symlinks.

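A minimal sketch of that parsing utility, in plain Scheme, with `.`
and `..` turned into the corresponding symbols:

```scheme
(define (parse-path str)
  ;; split on "/", dropping empty components, turning "." and ".."
  ;; into symbols and leaving everything else as strings
  (let loop ((chars (string->list str)) (cur '()) (out '()))
    (define (flush out)
      (if (null? cur)
          out
          (let ((c (list->string (reverse cur))))
            (cons (cond ((string=? c ".")  '|.|)
                        ((string=? c "..") '..)
                        (else c))
                  out))))
    (cond ((null? chars)
           (reverse (flush out)))
          ((char=? (car chars) #\/)
           (loop (cdr chars) '() (flush out)))
          (else
           (loop (cdr chars) (cons (car chars) cur) out)))))

;; (parse-path "a/./b/../c") yields ("a" |.| "b" .. "c"), ready for
;; the resolver to walk.
```
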
* When symlinks are traversed by the path resolver and by the explore
  CLI, make `<tag>/current` be a symlink to the timestamp of the
  current snapshot rather than a clone of it, for neatness.

## Front-end

* Install progress reporting callbacks to report progress to the user,
  with an option for quiet (no reporting), normal (reporting if >60s
  have passed since the last report), verbose (report every file), or
  very verbose (report every file and block) operation. The "normal"
  rule is sketched below.

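A minimal sketch of the time-throttled "normal" mode, assuming a
`current-seconds` procedure returning wall-clock seconds, as Chicken
provides; the names are illustrative:

```scheme
(define last-report 0)

(define (maybe-report msg)
  ;; print at most once per 60-second window
  (let ((now (current-seconds)))
    (when (>= (- now last-report) 60)
      (set! last-report now)
      (display msg)
      (newline))))
```
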
* Make the explore CLI let you cd into symlinks.

* Add a command to force removal of a tag lock.

* Add a command to list all the tags (with a * next to locked tags).

* Add a command to list the contents of any directory in the archive
  node tree.

* API mode: works something like the backend API, except at the
  archive level. Supports all the important archive operations, plus
  access to sexpr stream writers and key stream writers,
  `archive-node-fold`, etc. Requested by andyjpb; perhaps I can write
  the framework for this and then let him add API functions as he
  desires.

* Command-line support to extract the contents of a given path in the
  archive, rather than needing to use explore mode. Also the option to
  extract given just a block key (useful when reading from keys logged
  manually at snapshot time).

* FUSE/9p support. Mount it as a read-only filesystem :-D Then
  consider adding Fossil-style writing to the `current` of a snapshot,
  with copy-on-write of blocks to a buffer area on the local disk,
  then the option to make a snapshot of `current`. Put these into
  separate "ugarit-frontend-9p" and "ugarit-frontend-fuse" eggs, to
  control the dependencies.

* Filesystem watching. Even with the hash-caching trick, a snapshot
  will still involve walking the entire directory tree and looking up
  every file in the hash cache. We can do better than that - some
  platforms provide an interface for receiving real-time notifications
  of changed or added files. Using this, we could allow Ugarit to run
  in continuous mode, keeping a log of file notifications from the OS
  while it does an initial full snapshot. It can then wait for a
  specified period (one hour, perhaps?), accumulating names of files
  changed since it started, before creating a new snapshot by
  uploading just the files it knows to have changed, while subsequent
  file change notifications go to a new list.

## Testing

* An option to verify a snapshot, walking every block in it, checking
  that there are no dangling references and that everything matches
  its hash, without needing to put it into a filesystem, and applying
  any other sanity checks we can think of en route. Optionally compare
  it to an on-disk filesystem, while we're at it.

* A unit test script around the `ugarit` command-line tool; the corpus
  should contain a mix of tiny and huge files and directories, awkward
  cases for sharing of blocks (many identical files in the same dir,
  etc), complex forms of file metadata, and so on. It should archive
  and restore the corpus several times over with each hash,
  compression, and encryption option.

* Testing crashes. See about writing a test backend binary that either
  raises an error or just kills the process directly after N
  operations, and sit in a loop running it with increasing N. Take N
  from an environment variable to make it easier to automate this
  (sketched below).

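A minimal sketch of the counter at the heart of such a backend; the
variable name `UGARIT_CRASH_AFTER` is made up for the example, and
`get-environment-variable` is SRFI-98:

```scheme
(define crash-after
  (string->number
   (or (get-environment-variable "UGARIT_CRASH_AFTER") "0")))

(define ops-done 0)

;; call this at the top of every backend operation handler
(define (count-op!)
  (set! ops-done (+ ops-done 1))
  (when (and (> crash-after 0) (>= ops-done crash-after))
    (error "simulated backend crash after" ops-done)))
```

The driving loop would then re-run the test suite with
`UGARIT_CRASH_AFTER` set to 1, 2, 3, and so on, until a full run
survives.
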
* Extract the debugging backend from backend-devtools into a proper
  backend binary that takes a path to a log file and a backend command
  line to wrap.

* Invoke the archive unit tests with every compression and encryption
  option, and with different hashing algorithms, with and without
  keys.

# Acknowledgements

The original idea came from Venti, a content-addressed storage system
from Plan 9. Venti is usable directly by user applications, and is
also integrated with the Fossil filesystem to support snapshotting the
status of a Fossil filesystem. Fossil allows references to be either
to a block number on the Fossil partition or to a Venti key; so when a
filesystem has been snapshotted, all it now contains is a "root
directory" pointer into the Venti archive, and any files modified
thereafter are copied-on-write into Fossil, where they may be modified
until the next snapshot.

We're nowhere near that exciting yet, but using FUSE, we might be able
to do something similar, which might be fun. However, Venti inspired
me when I read about it years ago; it showed me how elegant
content-addressed storage is. Finding out that the Git version control
system used the same basic tricks really just confirmed this for me.

Also, I'd like to tip my hat to Duplicity. With the changing economics
of storage presented by services like Amazon S3 and rsync.net, I
looked to Duplicity as it provided both SFTP and S3 backends. However,
it worked in terms of full and incremental backups, a model that I
think made sense for magnetic tapes, but loses out to
content-addressed snapshots when you have random-access
media. Duplicity inspired me by its adoption of multiple backends, the
very backends I want to use, but I still hungered for a
content-addressed snapshot store.

I'd also like to tip my hat to Box Backup. I've only used it a little,
because it requires a special server to manage the storage (and I want
to get my backups *off* of my servers), but it also inspires me with
directions I'd like to take Ugarit. It's much more aware of real-time
access to random-access storage than Duplicity, and has a very
interesting continuous background incremental backup mode, moving away
from the tape-based paradigm of backups as something you do on a
special day of the week, like some kind of religious observance. I
hope the author Ben, who is a good friend of mine, won't mind me
plundering his source code for details on how to request real-time
notification of changes from the filesystem, and how to read and write
extended attributes!

Moving on from the world of backup, I'd like to thank the Chicken Team
for producing Chicken Scheme. Felix and the community at #chicken on
Freenode have particularly inspired me with their can-do attitude to
combining programming-language elegance and pragmatic engineering -
two things many would think irreconcilable enemies. Of course, they
didn't do it all themselves - R5RS Scheme and the SRFIs provided a
solid foundation to build on, and there's a cast of many more in the
Chicken community, working on other bits of Chicken or just egging
everyone on. And I can't not thank Henry Baker for writing the seminal
paper on the technique Chicken uses to implement full tail-calling
Scheme with cheap continuations on top of C; Henry already had my
admiration for his work on combining elegance and pragmatism in linear
logic. Why doesn't he return my calls? I even sent flowers.

A special thanks should go to Christian Kellermann for porting Ugarit
to use Chicken 4 modules, too, which was otherwise a big bottleneck to
development, as I was stuck on Chicken 3 for some time! And to Andy
Bennett for many insightful conversations about future directions.

Thanks to the early adopters who brought me useful feedback, too!

And I'd like to thank my wife for putting up with me spending several
evenings and weekends and holiday days working on this thing...

# Version history

* 1.0.2: Made the file cache also commit periodically, rather than on
  every write, in order to improve performance. Counting blocks and
  bytes uploaded / reused, and file cache bytes as well as hits;
  reporting same in the snapshot UI and logging same to the snapshot
  metadata. Switched to the `posix-extras` egg and ditched our own
  `posixextras.scm` wrappers. Used the `parley` egg in the `ugarit
  explore` CLI for line editing. Added logging infrastructure, and
  recording of snapshot logs in the snapshot. Added recovery from
  extraction errors. Listed lock state of tags in explore
  mode. Backend protocol v2 introduced (retaining v1 for
  compatibility), allowing for an error on backend startup, and
  logging of nonfatal errors, warnings, and info on startup and on
  all protocol calls. Added `ugarit-archive-admin` command line
  interface to backend-specific administrative interfaces.
  Configuration of the splitlog backend (write protection, adjusting
  block size and logfile size limit and commit interval) is now
  possible via the admin interface. The admin interface also permits
  rebuilding the metadata index of a splitlog archive with the
  `reindex!` admin command.

    * BUGFIX: Made the file cache check that the file hashes it finds
      in the cache actually exist in the archive, to protect against
      the case where a crash of some kind has caused unflushed changes
      to be lost; the file cache may well have committed changes that
      the backend hasn't, leading to references to nonexistent
      blocks. Note that we assume that archives are sequentially safe,
      e.g. if the final indirect block of a large file made it, all
      the partial blocks must have made it too.

    * BUGFIX: Added an explicit `flush!` command to the backend
      protocol, and put explicit flushes at critical points in higher
      layers (`backend-cache`, the archive abstraction in the Ugarit
      core, and when tagging a snapshot) so that we ensure the blocks
      we point at are flushed before committing references to them in
      the `backend-cache` or file caches, or into tags, to ensure
      crash safety.

    * BUGFIX: Made the splitlog backend never exceed the file size
      limit (except when passed blocks that, plus a header, are larger
      than it), rather than letting a partial block hang over the
      'end'.

    * BUGFIX: Fixed tag locking, which was broken all over the
      place. Concurrent snapshots to the same tag should now block for
      one another, although why you'd want to *do* that is
      questionable.

    * BUGFIX: Fixed generation of non-keyed hashes, which was
      incorrectly appending the type to the hash without an outer
      hash. This breaks backwards compatibility, but nobody was using
      the old algorithm, right? I'll introduce it as an option if
      required.

* 1.0.1: Consistency check on read blocks by default. Removed the
  warning about deletions from backend-cache; we need a new mechanism
  to report warnings from backends to the user. Made backend-cache and
  backend-fs/splitlog commit periodically rather than after every
  insert, which should speed up snapshotting a lot, and reused the
  prepared statements rather than re-preparing them all the
  time. BUGFIX: splitlog backend now creates log files with
  "rw-------" rather than "rwx------" permissions; and all sqlite
  databases (splitlog metadata, cache file, and file-cache file) are
  created with "rw-------" rather than "rw-r--r--".

* 1.0: Migrated from gdbm to sqlite for metadata storage, removing the
  GPL taint. Unit test suite. backend-cache made into a separate
  backend binary. Removed backend-log. BUGFIX: file caching uses mtime
  *and* size now, rather than just mtime. Error handling so we skip
  objects that we cannot do something with, and proceed to try the
  rest of the operation.

* 0.8: Decoupled backends from the core and into separate binaries,
  accessed via standard input and output, so they can be run over SSH
  tunnels and other such magic.

* 0.7: File cache support, sorting of directories so they're archived
  in canonical order, and autoloading of hash/encryption/compression
  modules so they're not required dependencies any more.

* 0.6: `.ugarit` support.

* 0.5: Keyed hashing so attackers can't tell what blocks you have,
  markers in logs so the index can be reconstructed, sha2 support, and
  passphrase support.

* 0.4: AES encryption.

* 0.3: Added splitlog backend, and fixed a .meta file typo.

* 0.2: Initial public release.

* 0.1: Internal development release.