v0.12.0 had major breaking changes to the API and internal behavior. This section describes the code changes required to migrate to v0.12.0.
- `sync` caches are no longer enabled by default: Please use the crate feature `sync`
  to enable them (see the sketch after this list).
- No more background threads: All cache types, `future::Cache`, `sync::Cache`, and
  `sync::SegmentedCache`, no longer spawn background threads.
  - The `scheduled-thread-pool` crate was removed from the dependencies.
  - Because of this change, many private methods and some public methods under the
    `future` module were converted to `async` methods. You will need to add `.await`
    to your code for those methods.
- Immediate notification delivery: The `notification::DeliveryMode` enum for the
  eviction listener was removed. Now all cache types behave as if the `Immediate`
  delivery mode is specified.
  - The `DeliveryMode` enum had two variants, `Immediate` and `Queued`.
    - The former should be easier to use than the latter as it guarantees to preserve
      the order of events on a given cache key.
    - The latter did not use internal locks and would provide higher performance under
      heavy cache writes.
  - Now all cache types work as if the `Immediate` mode is specified.
    - `future::Cache`: In earlier versions of `future::Cache`, the queued mode was
      used. Now it behaves as if the immediate mode is specified.
    - `sync` caches: In earlier versions of `sync::Cache` and `sync::SegmentedCache`,
      the immediate mode was already the default, so this change should only affect
      those of you who were explicitly using the queued mode.
  - The queued mode was implemented by using a background thread. It was removed
    because there is no longer a thread pool available.
  - If you need the queued mode back, please file a GitHub issue. We could provide a
    way to use a user-supplied thread pool.
The following sections describe the changes you may need to make to your code.
`sync::Cache` and `sync::SegmentedCache`

- Please use the crate feature `sync` to enable `sync` caches.
- Since the background threads were removed, the maintenance tasks such as removing
  expired entries are no longer executed periodically.
  - The `thread_pool_enabled` method of the `sync::CacheBuilder` was removed. The
    thread pool is always disabled.
  - See the maintenance tasks section for more details.
- The `sync` method of the `sync::ConcurrentCacheExt` trait was moved to the
  `sync::Cache` and `sync::SegmentedCache` types. It was also renamed to
  `run_pending_tasks` (see the sketch after this list).
- Now `sync` caches always work as if the immediate delivery mode is specified for
  the eviction listener.
  - In older versions, the immediate mode was the default mode, and the queued mode
    could be optionally selected.
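Here is a minimal before/after sketch of the `sync` to `run_pending_tasks` rename; the cache contents and the `entry_count` check are only for illustration:

```rust
use moka::sync::Cache;

fn main() {
    let cache: Cache<u32, char> = Cache::builder().max_capacity(100).build();

    cache.insert(0, 'a');
    cache.invalidate(&0);

    // v0.11: the method came from the `sync::ConcurrentCacheExt` trait.
    // use moka::sync::ConcurrentCacheExt;
    // cache.sync();

    // v0.12: it is an inherent method, renamed to run_pending_tasks.
    cache.run_pending_tasks();

    // Once the pending tasks have run, the policy data reflects the removal.
    assert_eq!(cache.entry_count(), 0);
}
```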
`future::Cache`

- The `get` method is now an `async fn`, so you must `await` the result.
- The `blocking` method was removed.
  - Please use the async runtime's blocking API instead.
  - See the replacing the blocking API section for more details.
- Now the `or_insert_with_if` method of the entry API requires a `Send` bound for the
  `replace_if` closure.
- The `eviction_listener_with_queued_delivery_mode` method of `future::CacheBuilder`
  was removed.
  - Please use one of the new methods instead: `eviction_listener` or
    `async_eviction_listener`.
  - See the updating the eviction listener section for more details.
- The `sync` method of the `future::ConcurrentCacheExt` trait was moved to the
  `future::Cache` type and renamed to `run_pending_tasks`. It was also changed to an
  `async fn` (see the sketch after this list).
- Since the background threads were removed, the maintenance tasks such as removing
  expired entries are no longer executed periodically.
  - See the maintenance tasks section for more details.
- Now `future::Cache` always behaves as if the immediate delivery mode is specified
  for the eviction listener.
  - In older versions, the queued delivery mode was used.
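Here is a minimal before/after sketch of the `get` and `run_pending_tasks` changes, assuming a Tokio runtime; the key and value are only for illustration:

```rust
use moka::future::Cache;

#[tokio::main]
async fn main() {
    let cache: Cache<u32, char> = Cache::new(100);

    cache.insert(0, 'a').await;

    // v0.11: let v = cache.get(&0);
    // v0.12: `get` is an async fn, so add `.await`.
    let v = cache.get(&0).await;
    assert_eq!(v, Some('a'));

    // v0.11: the method came from the `future::ConcurrentCacheExt` trait.
    // cache.sync();

    // v0.12: renamed to run_pending_tasks and changed to an async fn.
    cache.run_pending_tasks().await;
}
```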
Replacing the blocking API

The `blocking` method of `future::Cache` was removed. Please use the async runtime's
blocking API instead.
Tokio
- Call the `tokio::runtime::Handle::current()` method in async context to obtain a
  handle to the current Tokio runtime.
- From outside async context, call the cache's async method using the `block_on`
  method of the runtime.
```rust
use std::sync::Arc;

#[tokio::main]
async fn main() {
    // Create a future cache.
    let cache = Arc::new(moka::future::Cache::new(100));

    // In async context, you can obtain a handle to the current Tokio runtime.
    let rt = tokio::runtime::Handle::current();

    // Spawn an OS thread. Pass the handle and cache.
    let thread = {
        let cache = Arc::clone(&cache);
        std::thread::spawn(move || {
            // Call the async method using the block_on method of the Tokio runtime.
            rt.block_on(cache.insert(0, 'a'));
        })
    };

    // Wait for the thread to complete.
    thread.join().unwrap();

    // Check the result.
    assert_eq!(cache.get(&0).await, Some('a'));
}
```
async-std
- From outside async context, call the cache's async method using the
  `async_std::task::block_on` function.
```rust
use std::sync::Arc;

#[async_std::main]
async fn main() {
    // Create a future cache.
    let cache = Arc::new(moka::future::Cache::new(100));

    // Spawn an OS thread. Pass the cache.
    let thread = {
        let cache = Arc::clone(&cache);
        std::thread::spawn(move || {
            use async_std::task::block_on;

            // Call the async method using the block_on function of async-std.
            block_on(cache.insert(0, 'a'));
        })
    };

    // Wait for the thread to complete.
    thread.join().unwrap();

    // Check the result.
    assert_eq!(cache.get(&0).await, Some('a'));
}
```
Updating the eviction listener

The `eviction_listener_with_queued_delivery_mode` method of `future::CacheBuilder`
was removed. Please use one of the new methods, `eviction_listener` or
`async_eviction_listener`, instead.

The `eviction_listener` method takes the same closure as the old method. If you do
not need to `.await` anything in the eviction listener, use this method.
This code snippet is borrowed from an example in the documentation of `future::Cache`:
```rust
let eviction_listener = |key, _value, cause| {
    println!("Evicted key {key}. Cause: {cause:?}");
};

let cache = Cache::builder()
    .max_capacity(100)
    .expire_after(expiry)
    .eviction_listener(eviction_listener)
    .build();
```
The `async_eviction_listener` method takes a closure that returns a `Future`. If you
need to `await` something in the eviction listener, use this method. The actual
return type of the closure is `notification::ListenerFuture`, which is a type alias
of `Pin<Box<dyn Future<Output = ()> + Send>>`. You can use the `boxed` method of the
`future::FutureExt` trait to convert a regular `Future` into this type.
This code snippet is borrowed from an example in the documentation of `future::Cache`:
```rust
use moka::notification::ListenerFuture;
// FutureExt trait provides the boxed method.
use moka::future::FutureExt;

let eviction_listener = move |k, v: PathBuf, cause| -> ListenerFuture {
    println!("\n== An entry has been evicted. k: {k:?}, v: {v:?}, cause: {cause:?}");
    let file_mgr2 = Arc::clone(&file_mgr1);

    // Create a Future that removes the data file at the path `v`.
    async move {
        // Acquire the write lock of the DataFileManager.
        let mut mgr = file_mgr2.write().await;

        // Remove the data file. We must handle error cases here to
        // prevent the listener from panicking.
        if let Err(_e) = mgr.remove_data_file(v.as_path()).await {
            eprintln!("Failed to remove a data file at {v:?}");
        }
    }
    // Convert the regular Future into ListenerFuture. This method is
    // provided by the moka::future::FutureExt trait.
    .boxed()
};

// Create the cache. Set a time to live of two seconds and set the
// eviction listener.
let cache = Cache::builder()
    .max_capacity(100)
    .time_to_live(Duration::from_secs(2))
    .async_eviction_listener(eviction_listener)
    .build();
```
The maintenance tasks

In older versions, the maintenance tasks needed by the cache were periodically
executed in the background by a global thread pool managed by `moka`. Now, none of
the cache types use the thread pool, so those maintenance tasks are executed from
time to time in the foreground when certain cache methods (`get`, `get_with`,
`insert`, etc.) are called by user code.
Figure 1. The lifecycle of cached entries
These maintenance tasks include:
- Determine whether to admit a "temporary admitted" entry or not.
- Apply the recording of cache reads and writes to the internal data structures for the cache policies, such as the LFU filter, LRU queues, and hierarchical timer wheels.
- When the cache's max capacity is exceeded, remove the least recently used (LRU) entries.
- Remove expired entries.
- Find and remove the entries that have been invalidated by the `invalidate_all` or
  `invalidate_entries_if` methods.
- Deliver removal notifications to the eviction listener. (Call the eviction listener
  closure with the information about the evicted entry.)
They will be executed by the following cache methods when one of the following
conditions is met:

Cache methods:
- All cache write methods: `insert`, `get_with`, `invalidate`, etc., except for
  `invalidate_all` and `invalidate_entries_if`.
- Some of the cache read methods: `get`.
- The `run_pending_tasks` method, which executes the pending maintenance tasks
  explicitly.
Conditions:
- When the number of pending read or write recordings exceeds the threshold.
  - The threshold is currently hard-coded to 64 items.
- When the time since the last execution of the maintenance tasks exceeds the
  threshold.
  - The threshold is currently hard-coded to 300 milliseconds.
You can execute the pending maintenance tasks explicitly by calling the
`run_pending_tasks` method. This method is available for all cache types. (For
`future::Cache`, it is an `async fn`, so you must `await` it.)
Note that cache read methods such as `get`, `get_with`, and `contains_key` never
return expired entries even though expired entries are not removed from the cache
immediately. So you do not need to call the `run_pending_tasks` method to remove
expired entries unless you want to remove them immediately (e.g. to free some
resources), as shown in the sketch below.
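Here is a minimal sketch of that behavior, using a `sync::Cache` with a short time-to-live chosen only for illustration:

```rust
use std::time::Duration;
use moka::sync::Cache;

fn main() {
    let cache: Cache<u32, char> = Cache::builder()
        .time_to_live(Duration::from_millis(100))
        .build();

    cache.insert(0, 'a');

    // Wait until the entry has expired.
    std::thread::sleep(Duration::from_millis(200));

    // `get` never returns an expired entry...
    assert_eq!(cache.get(&0), None);

    // ...but the entry may still be counted until the pending maintenance
    // tasks are executed.
    cache.run_pending_tasks();
    assert_eq!(cache.entry_count(), 0);
}
```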