 * SOME HIGH LEVEL CODE DOCUMENTATION:
 *
 * Bcache mostly works with cache sets, cache devices, and backing devices.
 *
 * Support for multiple cache devices hasn't quite been finished off yet, but
 * it's about 95% plumbed through. A cache set and its cache devices are sort of
 * like an md raid array and its component devices. Most of the code doesn't
 * care about individual cache devices; the main abstraction is the cache set.
 *
 * Multiple cache devices are intended to give us the ability to mirror dirty
 * cached data and metadata, without mirroring clean cached data.
 *
 * Backing devices are different, in that they have a lifetime independent of a
 * cache set. When you register a newly formatted backing device it'll come up
 * in passthrough mode, and then you can attach and detach a backing device from
 * a cache set at runtime - while it's mounted and in use. Detaching implicitly
 * invalidates any cached data for that backing device.
 *
 * A cache set can have multiple (many) backing devices attached to it.
 *
 * There's also flash only volumes - this is the reason for the distinction
 * between struct cached_dev and struct bcache_device. A flash only volume
 * works much like a bcache device that has a backing device, except the
 * "cached" data is always dirty. The end result is that we get thin
 * provisioning with very little additional code.
 *
 * Flash only volumes work, but they're not production ready because the moving
 * garbage collector needs more work. More on that later.
 *
 * BUCKETS/ALLOCATION:
 *
 * Bcache is primarily designed for caching, which means that in normal
 * operation all of our available space will be allocated. Thus, we need an
 * efficient way of deleting things from the cache so we can write new things
 * to it.
 *
 * To do this, we first divide the cache device up into buckets. A bucket is the
 * unit of allocation; they're typically around 1 mb - anywhere from 128k to
 * 2M+.
 *
 * Each bucket has a 16 bit priority, and an 8 bit generation associated with
 * it. The gens and priorities for all the buckets are stored contiguously and
 * packed on disk (in a linked list of buckets - aside from the superblock, all
 * of bcache's metadata is stored in buckets).
 *
 * The priority is used to implement an LRU. We reset a bucket's priority when
 * we allocate it or on a cache hit, and every so often we decrement the
 * priority of each bucket. It could be used to implement something more
 * sophisticated, if anyone ever gets around to it.
 *
 * The generation is used for invalidating buckets. Each pointer also has an 8
 * bit generation embedded in it; for a pointer to be considered valid, its gen
 * must match the gen of the bucket it points into. Thus, to reuse a bucket all
 * we have to do is increment its gen (and write its new gen to disk; we batch
 * this up).
 *
 * Bcache is entirely COW - we never write twice to a bucket, even buckets that
 * contain metadata (including btree nodes).
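 *
 * A rough sketch of the gen mechanism (the struct and helper names here are
 * simplified, illustrative only - not the actual bcache types):
 *
 *	struct example_bucket { u8 gen; u16 prio; };
 *	struct example_ptr    { u8 gen; u64 offset; };
 *
 *	// a pointer is only valid while its embedded gen matches the gen of
 *	// the bucket it points into
 *	static inline bool example_ptr_stale(const struct example_bucket *b,
 *					     const struct example_ptr *ptr)
 *	{
 *		return ptr->gen != b->gen;
 *	}
 *
 *	// to reuse a bucket we just bump its gen (written back to disk in a
 *	// batched prio/gen update), which invalidates every pointer into it
 *	static inline void example_invalidate_bucket(struct example_bucket *b)
 *	{
 *		b->gen++;
 *	}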
 *
 * THE BTREE:
 *
 * Bcache is in large part designed around the btree.
 *
 * At a high level, the btree is just an index of key -> ptr tuples.
 *
 * Keys represent extents, and thus have a size field. Keys also have a variable
 * number of pointers attached to them (potentially zero, which is handy for
 * invalidating the cache).
 *
 * The key itself is an inode:offset pair. The inode number corresponds to a
 * backing device or a flash only volume. The offset is the ending offset of the
 * extent within the inode - not the starting offset; this makes lookups
 * slightly more convenient.
 *
 * Pointers contain the cache device id, the offset on that device, and an 8 bit
 * generation number. More on the gen later.
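 *
 * Conceptually (field names are illustrative, not the on-disk key format):
 *
 *	struct example_key {
 *		u64	inode;		// backing device or flash only volume
 *		u64	offset;		// end of the extent, in sectors
 *		u32	size;		// extent size, in sectors
 *		u8	nr_ptrs;	// may be 0, e.g. for invalidating
 *		struct example_key_ptr {
 *			u8	dev;	// cache device id
 *			u64	offset;	// offset on that device
 *			u8	gen;	// must match the bucket's gen
 *		}	ptrs[];
 *	};
 *
 * so a key describes the extent [offset - size, offset) within the given inode.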
 *
 * Index lookups are not fully abstracted - cache lookups in particular are
 * still somewhat mixed in with the btree code, but things are headed in that
 * direction.
 *
 * Updates are fairly well abstracted, though. There are two different ways of
 * updating the btree; insert and replace.
 *
 * BTREE_INSERT will just take a list of keys and insert them into the btree -
 * overwriting (possibly only partially) any extents they overlap with. This is
 * used to update the index after a write.
 *
 * BTREE_REPLACE is really cmpxchg(); it inserts a key into the btree iff it is
 * overwriting a key that matches another given key. This is used for inserting
 * data into the cache after a cache miss, and for background writeback, and for
 * the moving garbage collector.
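 *
 * In pseudocode (not the real interface), the difference is roughly:
 *
 *	// BTREE_INSERT: unconditional - overwrite whatever overlaps
 *	btree_insert(index, new_key);
 *
 *	// BTREE_REPLACE: conditional, like cmpxchg() - only goes through if
 *	// the index still contains old_key
 *	if (btree_lookup(index, old_key->inode, old_key->offset) == old_key)
 *		btree_insert(index, new_key);
 *	// else: someone else (e.g. a foreground write) already updated that
 *	// range - drop the insert rather than clobbering the newer data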
 *
 * There is no "delete" operation; deleting things from the index is
 * accomplished either by invalidating pointers (by incrementing a bucket's
 * gen) or by inserting a key with 0 pointers - which will overwrite anything
 * previously present at that location in the index.
 *
 * This means that there are always stale/invalid keys in the btree. They're
 * filtered out by the code that iterates through a btree node, and removed when
 * a btree node is rewritten.
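 *
 * That filtering amounts to something like (illustrative pseudocode only):
 *
 *	// keys whose pointers are all stale, and 0-pointer keys that were
 *	// inserted as "deletes", are simply skipped by the iterator; they
 *	// only physically disappear when the node is rewritten
 *	for_each_key(node, k)
 *		if (!example_key_has_valid_ptr(k))
 *			continue;	// stale or deleted - ignore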
 *
 * BTREE NODES:
 *
 * Our unit of allocation is a bucket, and we can't arbitrarily allocate and
 * free smaller than a bucket - so, that's how big our btree nodes are.
 *
 * (If buckets are really big we'll only use part of the bucket for a btree node
 * - no less than 1/4th - but a bucket still contains no more than a single
 * btree node. I'd actually like to change this, but for now we rely on the
 * bucket's gen for deleting btree nodes when we rewrite/split a node.)
 *
 * Anyways, btree nodes are big - big enough to be inefficient with a textbook
 * btree implementation.
 *
 * The way this is solved is that btree nodes are internally log structured; we
 * can append new keys to an existing btree node without rewriting it. This
 * means each set of keys we write is sorted, but the node is not.
 *
 * We maintain this log structure in memory - keeping 1Mb of keys sorted would
 * be expensive, and we have to distinguish between the keys we have written and
 * the keys we haven't. So to do a lookup in a btree node, we have to search
 * each sorted set. But we do merge written sets together lazily, so the cost of
 * these extra searches is quite low (normally most of the keys in a btree node
 * will be in one big set, and then there'll be one or two sets that are much
 * smaller).
 *
 * This log structure makes bcache's btree more of a hybrid between a
 * conventional btree and a compacting data structure, with some of the
 * advantages of both.
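 *
 * A lookup within a node is therefore roughly (pseudocode, not the actual
 * iterator code):
 *
 *	// each sorted set ("bset") is one run of keys appended to the node;
 *	// search every run and keep the best candidate
 *	for (i = 0; i < node->nr_bsets; i++)
 *		best = best_of(best, bset_search(&node->bset[i], search_key));
 *
 * which stays cheap because lazy compaction keeps the number of sets small
 * (usually one big sorted set plus one or two small ones).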
 *
 * GARBAGE COLLECTION:
 *
 * We can't just invalidate any bucket - it might contain dirty data or
 * metadata. If it once contained dirty data, other writes might overwrite it
 * later, leaving no valid pointers into that bucket in the index.
 *
 * Thus, the primary purpose of garbage collection is to find buckets to reuse.
 * It also counts how much valid data each bucket currently contains, so that
 * allocation can reuse buckets sooner when they've been mostly overwritten.
 *
 * It also does some things that are really internal to the btree
 * implementation. If a btree node contains pointers that are stale by more than
 * some threshold, it rewrites the btree node to avoid the bucket's generation
 * wrapping around. It also merges adjacent btree nodes if they're empty enough.
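 *
 * In outline (illustrative pseudocode, not the real GC loop):
 *
 *	for_each_bucket(ca, b)			// reset counts
 *		b->sectors_used = 0;
 *
 *	for_each_btree_key(k)			// mark pass
 *		for_each_valid_ptr(k, ptr)
 *			ptr_bucket(ptr)->sectors_used += k->size;
 *
 *	// buckets with little or no live data become candidates for reuse;
 *	// nodes whose pointers are stale by more than a threshold get
 *	// rewritten so bucket gens (only 8 bits) can't wrap around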
 *
 * THE JOURNAL:
 *
 * Bcache's journal is not necessary for consistency; we always strictly
 * order metadata writes so that the btree and everything else is consistent on
 * disk in the event of an unclean shutdown, and in fact bcache had writeback
 * caching (with recovery from unclean shutdown) before journalling was
 * implemented.
 *
 * Rather, the journal is purely a performance optimization; we can't complete a
 * write until we've updated the index on disk, otherwise the cache would be
 * inconsistent in the event of an unclean shutdown. This means that without the
 * journal, on random write workloads we constantly have to update all the leaf
 * nodes in the btree, and those writes will be mostly empty (appending at most
 * a few keys each) - highly inefficient in terms of amount of metadata writes,
 * and it puts more strain on the various btree resorting/compacting code.
 *
 * The journal is just a log of keys we've inserted; on startup we just reinsert
 * all the keys in the open journal entries. That means that when we're updating
 * a node in the btree, we can wait until a 4k block of keys fills up before
 * writing it out.
 *
 * For simplicity, we only journal updates to leaf nodes; updates to parent
 * nodes are rare enough (since our leaf nodes are huge) that it wasn't worth
 * the complexity to deal with journalling them (in particular, journal replay)
 * - updates to non leaf nodes just happen synchronously (see btree_split()).
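 *
 * The write path with the journal is then roughly (simplified, not the actual
 * call chain):
 *
 *	journal_add_keys(j, keys);	// small sequential write; makes the
 *					// update recoverable
 *	complete_write(op);		// caller can be acked now
 *	btree_insert_keys(b, keys);	// index update happens in memory; the
 *					// dirty leaf node is written back
 *					// later, once it has accumulated a
 *					// batch of keys
 *
 * and journal replay after an unclean shutdown just re-runs the insert step
 * for every key still sitting in the open journal entries.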
 */

#define pr_fmt(fmt) "bcachefs: %s() " fmt "\n", __func__
#include <linux/bug.h>
#include <linux/bio.h>
#include <linux/closure.h>
#include <linux/kobject.h>
#include <linux/lglock.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/percpu-refcount.h>
#include <linux/radix-tree.h>
#include <linux/rbtree.h>
#include <linux/rhashtable.h>
#include <linux/rwsem.h>
#include <linux/seqlock.h>
#include <linux/shrinker.h>
#include <linux/types.h>
#include <linux/workqueue.h>

#include "bcachefs_format.h"

#include <linux/dynamic_fault.h>
#define bch2_fs_init_fault(name)				\
	dynamic_fault("bcachefs:bch_fs_init:" name)
#define bch2_meta_read_fault(name)				\
	dynamic_fault("bcachefs:meta:read:" name)
#define bch2_meta_write_fault(name)				\
	dynamic_fault("bcachefs:meta:write:" name)
#define bch2_fmt(_c, fmt)	"bcachefs (%s): " fmt "\n", ((_c)->name)

#define bch_info(c, fmt, ...)					\
	printk(KERN_INFO bch2_fmt(c, fmt), ##__VA_ARGS__)
#define bch_notice(c, fmt, ...)					\
	printk(KERN_NOTICE bch2_fmt(c, fmt), ##__VA_ARGS__)
#define bch_warn(c, fmt, ...)					\
	printk(KERN_WARNING bch2_fmt(c, fmt), ##__VA_ARGS__)
#define bch_err(c, fmt, ...)					\
	printk(KERN_ERR bch2_fmt(c, fmt), ##__VA_ARGS__)
#define bch_verbose(c, fmt, ...)				\
do {								\
	if ((c)->opts.verbose_recovery)				\
		bch_info(c, fmt, ##__VA_ARGS__);		\
} while (0)
/* Parameters that are useful for debugging, but should always be compiled in: */
#define BCH_DEBUG_PARAMS_ALWAYS()					\
	BCH_DEBUG_PARAM(key_merging_disabled,				\
		"Disables merging of extents")				\
	BCH_DEBUG_PARAM(btree_gc_always_rewrite,			\
		"Causes mark and sweep to compact and rewrite every "	\
		"btree node it traverses")				\
	BCH_DEBUG_PARAM(btree_gc_rewrite_disabled,			\
		"Disables rewriting of btree nodes during mark and sweep")\
	BCH_DEBUG_PARAM(btree_gc_coalesce_disabled,			\
		"Disables coalescing of btree nodes")			\
	BCH_DEBUG_PARAM(btree_shrinker_disabled,			\
		"Disables the shrinker callback for the btree node cache")
/* Parameters that should only be compiled in in debug mode: */
#define BCH_DEBUG_PARAMS_DEBUG()					\
	BCH_DEBUG_PARAM(expensive_debug_checks,				\
		"Enables various runtime debugging checks that "	\
		"significantly affect performance")			\
	BCH_DEBUG_PARAM(debug_check_bkeys,				\
		"Run bkey_debugcheck (primarily checking GC/allocation "\
		"information) when iterating over keys")		\
	BCH_DEBUG_PARAM(version_stress_test,				\
		"Assigns random version numbers to newly written "	\
		"extents, to test overlapping extent cases")		\
	BCH_DEBUG_PARAM(verify_btree_ondisk,				\
		"Reread btree nodes at various points to verify the "	\
		"mergesort in the read path against modifications")

#define BCH_DEBUG_PARAMS_ALL() BCH_DEBUG_PARAMS_ALWAYS() BCH_DEBUG_PARAMS_DEBUG()
#ifdef CONFIG_BCACHEFS_DEBUG
#define BCH_DEBUG_PARAMS() BCH_DEBUG_PARAMS_ALL()
#else
#define BCH_DEBUG_PARAMS() BCH_DEBUG_PARAMS_ALWAYS()
#endif
/* name, frequency_units, duration_units */
#define BCH_TIME_STATS()					\
	BCH_TIME_STAT(btree_node_mem_alloc, sec, us)		\
	BCH_TIME_STAT(btree_gc, sec, ms)			\
	BCH_TIME_STAT(btree_coalesce, sec, ms)			\
	BCH_TIME_STAT(btree_split, sec, us)			\
	BCH_TIME_STAT(btree_sort, ms, us)			\
	BCH_TIME_STAT(btree_read, ms, us)			\
	BCH_TIME_STAT(journal_write, us, us)			\
	BCH_TIME_STAT(journal_delay, ms, us)			\
	BCH_TIME_STAT(journal_blocked, sec, ms)			\
	BCH_TIME_STAT(journal_flush_seq, us, us)
#include "alloc_types.h"
#include "buckets_types.h"
#include "clock_types.h"
#include "io_types.h"
#include "journal_types.h"
#include "keylist_types.h"
#include "move_types.h"
#include "super_types.h"
/* 256k, in sectors */
#define BTREE_NODE_SIZE_MAX	512

/*
 * Number of nodes we might have to allocate in a worst case btree split
 * operation - we split all the way up to the root, then allocate a new root.
 */
#define btree_reserve_required_nodes(depth)	(((depth) + 1) * 2 + 1)

/* Number of nodes btree coalesce will try to coalesce at once */
#define GC_MERGE_NODES		4U

/* Maximum number of nodes we might need to allocate atomically: */
#define BTREE_RESERVE_MAX						\
	(btree_reserve_required_nodes(BTREE_MAX_DEPTH) + GC_MERGE_NODES)

/* Size of the freelist we allocate btree nodes from: */
#define BTREE_NODE_RESERVE	(BTREE_RESERVE_MAX * 2)
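/*
 * For example, if BTREE_MAX_DEPTH were 4, a worst case split would need
 * (4 + 1) * 2 + 1 = 11 nodes, so BTREE_RESERVE_MAX would be 11 + 4 = 15 and
 * BTREE_NODE_RESERVE 30.
 */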
struct crypto_blkcipher;

	GC_PHASE_SB_METADATA	= BTREE_ID_NR + 1,
	GC_PHASE_PENDING_DELETE,

struct bch_member_cpu {
	u64			nbuckets;	/* device size */
	u16			first_bucket;	/* index of first bucket used */
	u16			bucket_size;	/* sectors */

	struct percpu_ref	ref;
	struct percpu_ref	io_ref;
	struct completion	stop_complete;
	struct completion	offline_complete;

	/*
	 * Cached version of this device's member info from superblock
	 * Committed by bch2_write_super() -> bch_fs_mi_update()
	 */
	struct bch_member_cpu	mi;

	char			name[BDEVNAME_SIZE];

	struct bcache_superblock disk_sb;

	struct dev_group	self;

	/* biosets used in cloned bios for replicas and moving_gc */
	struct bio_set		replica_set;

	struct task_struct	*alloc_thread;

	struct prio_set		*disk_buckets;
	/*
	 * When allocating new buckets, prio_write() gets first dibs - since we
	 * may not be able to allocate at all without writing priorities and
	 * gens. prio_last_buckets[] contains the last buckets we wrote
	 * priorities to (so gc can mark them as metadata).
	 */
	u64			*prio_last_buckets;
	spinlock_t		prio_buckets_lock;
	struct bio		*bio_prio;
	/*
	 * free: Buckets that are ready to be used
	 *
	 * free_inc: Incoming buckets - these are buckets that currently have
	 * cached data in them, and we can't reuse them until after we write
	 * their new gen to disk. After prio_write() finishes writing the new
	 * gens/prios, they'll be moved to the free list (and possibly discarded)
	 */
	DECLARE_FIFO(long, free)[RESERVE_NR];
	DECLARE_FIFO(long, free_inc);
	spinlock_t		freelist_lock;

	size_t			fifo_last_bucket;

	/* Allocation stuff: */

	/* most out of date gen in the btree */
	struct bucket		*buckets;
	unsigned short		bucket_bits;	/* ilog2(bucket_size) */

	/* last calculated minimum prio */

	 * Bucket book keeping. The first element is updated by GC, the
	 * second contains a saved copy of the stats from the beginning
	struct bch_dev_usage __percpu *usage_percpu;
	struct bch_dev_usage	usage_cached;

	atomic_long_t		saturated_count;
	size_t			inc_gen_needs_gc;

	struct mutex		heap_lock;
	DECLARE_HEAP(struct bucket_heap_entry, heap);

	struct task_struct	*moving_gc_read;

	struct bch_pd_controller moving_gc_pd;

	struct write_point	tiering_write_point;

	struct write_point	copygc_write_point;

	struct journal_device	journal;

	struct work_struct	io_error_work;

	/* The rest of this all shows up in sysfs */
	atomic64_t		meta_sectors_written;
	atomic64_t		btree_sectors_written;
	u64 __percpu		*sectors_written;
/*
 * Flag bits for what phase of startup/shutdown the cache set is at, how we're
 * shutting down, etc.:
 *
 * BCH_FS_UNREGISTERING means we're not just shutting down, we're detaching
 * all the backing devices first (their cached data gets invalidated, and they
 * won't automatically reattach).
 */
	BCH_FS_INITIAL_GC_DONE,
	BCH_FS_WRITE_DISABLE_COMPLETE,
	BCH_FS_FSCK_FIXED_ERRORS,

	struct dentry		*btree;
	struct dentry		*btree_format;
	struct dentry		*failed;

	struct task_struct	*migrate;
	struct bch_pd_controller pd;

	struct dev_group	devs;

	struct list_head	list;

	struct kobject		internal;
	struct kobject		opts_dir;
	struct kobject		time_stats;

	struct device		*chardev;
	struct super_block	*vfs_sb;

	/* ro/rw, add/remove devices: */
	struct mutex		state_lock;
	enum bch_fs_state	state;

	/* Counts outstanding writes, for clean transition to read-only */
	struct percpu_ref	writes;
	struct work_struct	read_only_work;

	struct bch_dev __rcu	*devs[BCH_SB_MEMBERS_MAX];

	struct bch_opts		opts;
	/* Updated by bch2_sb_update(): */

	u8			meta_replicas_have;
	u8			data_replicas_have;

	struct bch_sb		*disk_sb;
	unsigned		disk_sb_order;

	unsigned short		block_bits;	/* ilog2(block_size) */

	struct closure		sb_write;
	struct mutex		sb_lock;

	struct backing_dev_info	bdi;

	struct bio_set		btree_read_bio;

	struct btree_root	btree_roots[BTREE_ID_NR];
	struct mutex		btree_root_lock;

	bool			btree_cache_table_init_done;
	struct rhashtable	btree_cache_table;

	/*
	 * We never free a struct btree, except on shutdown - we just put it on
	 * the btree_cache_freed list and reuse it later. This simplifies the
	 * code, and it doesn't cost us much memory as the memory usage is
	 * dominated by buffers that hold the actual btree node data and those
	 * can be freed - and the number of struct btrees allocated is
	 * effectively bounded.
	 *
	 * btree_cache_freeable effectively is a small cache - we use it because
	 * high order page allocations can be rather expensive, and it's quite
	 * common to delete and allocate btree nodes in quick succession. It
	 * should never grow past ~2-3 nodes in practice.
	 */
	struct mutex		btree_cache_lock;
	struct list_head	btree_cache;
	struct list_head	btree_cache_freeable;
	struct list_head	btree_cache_freed;

	/* Number of elements in btree_cache + btree_cache_freeable lists */
	unsigned		btree_cache_used;
	unsigned		btree_cache_reserve;
	struct shrinker		btree_cache_shrink;

	 * If we need to allocate memory for a new btree node and that
	 * allocation fails, we can cannibalize another node in the btree cache
	 * to satisfy the allocation - lock to guarantee only one thread does
	struct closure_waitlist	mca_wait;
	struct task_struct	*btree_cache_alloc_lock;

	mempool_t		btree_reserve_pool;

	/*
	 * Cache of allocated btree nodes - if we allocate a btree node and
	 * don't use it, if we free it that space can't be reused until going
	 * _all_ the way through the allocator (which exposes us to a livelock
	 * when allocating btree reserves fail halfway through) - instead, we
	 * can stick them here:
	 */
		struct open_bucket	*ob;
	}			btree_reserve_cache[BTREE_NODE_RESERVE * 2];
	unsigned		btree_reserve_cache_nr;
	struct mutex		btree_reserve_cache_lock;

	mempool_t		btree_interior_update_pool;
	struct list_head	btree_interior_update_list;
	struct mutex		btree_interior_update_lock;

	struct workqueue_struct	*wq;
	/* copygc needs its own workqueue for index updates.. */
	struct workqueue_struct	*copygc_wq;

	struct bch_pd_controller foreground_write_pd;
	struct delayed_work	pd_controllers_update;
	unsigned		pd_controllers_update_seconds;
	spinlock_t		foreground_write_pd_lock;
	struct bch_write_op	*write_wait_head;
	struct bch_write_op	*write_wait_tail;

	struct timer_list	foreground_write_wakeup;

	 * These contain all r/w devices - i.e. devices we can currently
	struct dev_group	all_devs;
	struct bch_tier		tiers[BCH_TIER_MAX];
	/* NULL if we only have devices in one tier: */
	struct bch_tier		*fastest_tier;

	u64			capacity;	/* sectors */

	/*
	 * When capacity _decreases_ (due to a disk being removed), we
	 * increment capacity_gen - this invalidates outstanding reservations
	 * and forces them to be revalidated
	 */
	atomic64_t		sectors_available;

	struct bch_fs_usage __percpu *usage_percpu;
	struct bch_fs_usage	usage_cached;
	struct lglock		usage_lock;

	struct mutex		bucket_lock;

	struct closure_waitlist	freelist_wait;

	/*
	 * When we invalidate buckets, we use both the priority and the amount
	 * of good data to determine which buckets to reuse first - to weight
	 * those together consistently we keep track of the smallest nonzero
	 * priority of any bucket.
	 */
	struct prio_clock	prio_clock[2];

	struct io_clock		io_clock[2];

	/* SECTOR ALLOCATOR */
	struct list_head	open_buckets_open;
	struct list_head	open_buckets_free;
	unsigned		open_buckets_nr_free;
	struct closure_waitlist	open_buckets_wait;
	spinlock_t		open_buckets_lock;
	struct open_bucket	open_buckets[OPEN_BUCKETS_COUNT];

	struct write_point	btree_write_point;

	struct write_point	write_points[WRITE_POINT_COUNT];
	struct write_point	promote_write_point;

	 * This write point is used for migrating data off a device
	 * and can point to any other device.
	 * We can't use the normal write points because those will
	 * gang up n replicas, and for migration we want only one new
	struct write_point	migration_write_point;

	/* GARBAGE COLLECTION */
	struct task_struct	*gc_thread;

	/*
	 * Tracks GC's progress - everything in the range [ZERO_KEY..gc_cur_pos]
	 * has been marked by GC.
	 *
	 * gc_cur_phase is a superset of btree_ids (BTREE_ID_EXTENTS etc.)
	 *
	 * gc_cur_phase == GC_PHASE_DONE indicates that gc is finished/not
	 * currently running, and gc marks are currently valid
	 *
	 * Protected by gc_pos_lock. Only written to by GC thread, so GC thread
	 * can read without a lock.
	 */
	seqcount_t		gc_pos_lock;
	struct gc_pos		gc_pos;

	/*
	 * The allocation code needs gc_mark in struct bucket to be correct, but
	 * it's not while a gc is in progress.
	 */
	struct rw_semaphore	gc_lock;

	struct bio_set		bio_read;
	struct bio_set		bio_read_split;
	struct bio_set		bio_write;
	struct mutex		bio_bounce_pages_lock;
	mempool_t		bio_bounce_pages;

	mempool_t		lz4_workspace_pool;
	void			*zlib_workspace;
	struct mutex		zlib_workspace_lock;
	mempool_t		compression_bounce[2];

	struct crypto_shash	*sha256;
	struct crypto_blkcipher	*chacha20;
	struct crypto_shash	*poly1305;

	atomic64_t		key_version;

	struct bio_list		read_retry_list;
	struct work_struct	read_retry_work;
	spinlock_t		read_retry_lock;

	wait_queue_head_t	writeback_wait;
	atomic_t		writeback_pages;
	unsigned		writeback_pages_max;
	atomic_long_t		nr_inodes;

	struct dentry		*debug;
	struct btree_debug	btree_debug[BTREE_ID_NR];
#ifdef CONFIG_BCACHEFS_DEBUG
	struct btree		*verify_data;
	struct btree_node	*verify_ondisk;
	struct mutex		verify_lock;
#endif

	u64			unused_inode_hint;
	/*
	 * A btree node on disk could have too many bsets for an iterator to fit
	 * on the stack - have to dynamically allocate them
	 */
	mempool_t		btree_bounce_pool;

	struct journal		journal;

	unsigned		bucket_journal_seq;

	/* The rest of this all shows up in sysfs */
	atomic_long_t		cache_read_races;

	unsigned		foreground_write_ratelimit_enabled:1;
	unsigned		copy_gc_enabled:1;
	unsigned		tiering_enabled:1;
	unsigned		tiering_percent;

	/*
	 * foreground writes will be throttled when the number of free
	 * buckets is below this percentage
	 */
	unsigned		foreground_target_percent;
#define BCH_DEBUG_PARAM(name, description) bool name;
	BCH_DEBUG_PARAMS_ALL()
#undef BCH_DEBUG_PARAM

#define BCH_TIME_STAT(name, frequency_units, duration_units)	\
	struct time_stats	name##_time;
	BCH_TIME_STATS()
#undef BCH_TIME_STAT
};
static inline bool bch2_fs_running(struct bch_fs *c)
{
	return c->state == BCH_FS_RO || c->state == BCH_FS_RW;
}

static inline unsigned bucket_pages(const struct bch_dev *ca)
{
	return ca->mi.bucket_size / PAGE_SECTORS;
}

static inline unsigned bucket_bytes(const struct bch_dev *ca)
{
	return ca->mi.bucket_size << 9;
}

static inline unsigned block_bytes(const struct bch_fs *c)
{
	return c->sb.block_size << 9;
}

#endif /* _BCACHE_H */