Similar to the well-known repository cache and the lesser-known cache droplet, there exists a third type of cache that all ATG engineers should know about.
The cache I’m referring to can be used to cache anything in memory. The cache droplet does this too, but it has the limitation of having to be invoked from a JSP page. This third cache decouples the caching from any front-end dependency.
Use Cases and Benefits
Anytime you need to query a 3rd party for information, that data can be cached. Some examples to consider:
- Your store locator functionality is driven via a 3rd party API that the application interfaces with. There’s no need to make the HTTP call each time for something as static as a store location.
- “Track Your Order” functionality can be cached with a short entry lifetime, since an order’s status isn’t going to change so quickly that it needs a fresh HTTP call every minute.
You use this type of cache when you’re interfacing with a 3rd party API and the data you’re getting changes slowly enough to be worth caching. Data associated with checkout wouldn’t be cached because it is unique to each shopper and very time sensitive.
You also only use this type of cache when the integration isn’t so tightly coupled with ATG that its data is loaded directly into a SQL repository (in that case, the repository cache already covers it).
Before considering the cache design, your code might look like this:
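As a sketch (all class and method names here are hypothetical stand-ins for your real integration code), the pre-cache design hits the remote API on every call:

```java
// Hypothetical pre-cache design: every lookup goes straight to the
// third-party API (simulated here); nothing is remembered between calls.
public class StoreLocatorService {
    public int remoteCalls = 0; // exposed only to illustrate the cost

    // Simulates the HTTP round trip to the store-locator API.
    public String lookupStore(String storeId) {
        remoteCalls++;
        return "Store " + storeId;
    }

    public static void main(String[] args) {
        StoreLocatorService service = new StoreLocatorService();
        service.lookupStore("042");
        service.lookupStore("042"); // same store, yet another remote call
        System.out.println(service.remoteCalls); // 2
    }
}
```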
After adding the cache, the design will look like:
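A sketch of the cached design, using a plain HashMap as a stand-in for the two ATG pieces (the real classes are `atg.service.cache.Cache` and `atg.service.cache.CacheAdapter`; the names here are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Illustration only: a HashMap plays the Cache role and fetchFromApi
// plays the CacheAdapter role. Not ATG code.
public class CachedStoreLookup {
    public int remoteCalls = 0;
    private final Map<String, String> cache = new HashMap<>();

    // CacheAdapter role: fetch the value on a cache miss.
    String fetchFromApi(String storeId) {
        remoteCalls++;                     // simulated HTTP round trip
        return "Store " + storeId;
    }

    // Cache role: consult the cache first, delegate on a miss.
    public String lookupStore(String storeId) {
        return cache.computeIfAbsent(storeId, this::fetchFromApi);
    }

    public static void main(String[] args) {
        CachedStoreLookup lookup = new CachedStoreLookup();
        lookup.lookupStore("042");
        lookup.lookupStore("042");         // served from the cache
        System.out.println(lookup.remoteCalls); // 1
    }
}
```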
There are two new elements that you’re adding in:
1. Cache – the umbrella ATG class that manages the cache;
2. CacheAdapter – this is what the Cache class subcontracts to for retrieving an item with the given key.
It’s really easy once these two new elements are understood. Here are the full steps:
1. Create a component, XYZCache.
$class=atg.service.cache.Cache
$description=Cache for xyz items
# cache settings (added in later)
You don’t need to implement any Java class for this, unless you’d like to override the default cache behavior (not recommended, and rarely useful).
2. Create the cache adapter by creating a new class that extends atg.nucleus.GenericService and implements atg.service.cache.CacheAdapter. You’ll need to implement the following methods as defined in the CacheAdapter interface:
Object getCacheElement(Object pKey) throws Exception
Object[] getCacheElements(Object[] pKeys) throws Exception
int getCacheKeySize(Object pKey)
int getCacheElementSize(Object pValue, Object pKey)
void removeCacheElement(Object pValue, Object pKey)
To be honest, 95% of the time you really only need to implement getCacheElement. The removeCacheElement method is only used “to allow the adapter to do any related clean up, if necessary, when an element is removed from the cache”, and the getCacheKeySize and getCacheElementSize methods are only used if you’re caching based on cache size (memory), which isn’t recommended (see discussion on that below).
From the getCacheElement method, you need to connect with your original service that retrieves the value to be cached. Don’t put all of this logic in the cache adapter, delegate it to a separate class!
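Putting step 2 together, a hedged skeleton of such an adapter might look like the following. StoreLocatorService, its lookupStore method, and all the names here are assumptions standing in for your real integration class; this won’t compile without the ATG jars on the classpath.

```java
import atg.nucleus.GenericService;
import atg.service.cache.CacheAdapter;

public class StoreLocationCacheAdapter extends GenericService implements CacheAdapter {

    // The separate class that actually talks to the 3rd party (hypothetical).
    private StoreLocatorService mStoreLocatorService;

    public StoreLocatorService getStoreLocatorService() { return mStoreLocatorService; }
    public void setStoreLocatorService(StoreLocatorService pService) { mStoreLocatorService = pService; }

    // Called by the Cache on a miss: delegate to the real service.
    public Object getCacheElement(Object pKey) throws Exception {
        return getStoreLocatorService().lookupStore((String) pKey);
    }

    // Bulk lookup: for simplicity, fetch each key individually.
    public Object[] getCacheElements(Object[] pKeys) throws Exception {
        Object[] elements = new Object[pKeys.length];
        for (int i = 0; i < pKeys.length; i++) {
            elements[i] = getCacheElement(pKeys[i]);
        }
        return elements;
    }

    // Only consulted when tuning by memory size; nominal values here.
    public int getCacheKeySize(Object pKey) { return 1; }
    public int getCacheElementSize(Object pValue, Object pKey) { return 1; }

    // Cleanup hook invoked when an entry is removed; nothing to do.
    public void removeCacheElement(Object pValue, Object pKey) { }
}
```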
3. Create a component for the cache adapter that was just created. It should be globally scoped, and doesn’t need any special configurations.
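As an illustration, the adapter component’s properties file might look like this (all paths, the class name, and the cacheAdapter wiring shown are assumptions to adapt to your codebase):

```
# /com/acme/cache/XYZCacheAdapter.properties (path is hypothetical)
$class=com.acme.cache.StoreLocationCacheAdapter
$scope=global
# the delegate service that actually calls the 3rd party (hypothetical)
storeLocatorService=/com/acme/integration/StoreLocatorService

# ...and in XYZCache.properties, point the cache at its adapter:
# cacheAdapter=/com/acme/cache/XYZCacheAdapter
```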
4. Reroute existing code through the cache.
Now that the cache is built, we need to route traffic through it. Wherever the code previously called the third-party service directly, call the Cache component’s get(pKey) method instead, with getCache resolving to the Cache component we created earlier. You’ll need to come up with a logical key-naming scheme.
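A hedged example of the call site, assuming a getCache property getter wired to the XYZCache component, plus a hypothetical “storeLocation:” key prefix and StoreLocation type (Cache.get throws Exception, so handle or rethrow it):

```java
// Hypothetical caller code; key scheme and StoreLocation are assumptions.
String cacheKey = "storeLocation:" + storeId;
StoreLocation location = (StoreLocation) getCache().get(cacheKey);
```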
If the cache does not have an entry for the key, it calls the cache adapter’s getCacheElement method to retrieve one. If an entry already exists, the cache applies your caching configuration and either returns the cached entry or fetches a fresh one.
Tuning the cache once it has been created is the most important task; a badly tuned cache can really mess things up. In the Cache class, you’ll find these relevant properties:
- maximumCacheEntries – the maximum number of entries to keep in the cache;
- maximumEntryLifetime – how long, in ms, to keep an entry in the cache; can be set to 0 for a no-caching setup, or -1 to cache items indefinitely;
- maximumCacheSize – cap the cache at a finite amount of memory;
- maximumEntrySize – cap the size of each entry in the cache.
I wouldn’t recommend tuning your cache by its memory footprint, and here’s why:
- Your application cluster’s individual boxes may not all have identical hardware, which would complicate the cache sizing for each JVM;
- It’s unlikely anyone will know each box’s memory profile well enough to tune the cache against the underlying hardware (unless you host in-house, which is rare).
Tuning by the number and lifetime of cache elements usually makes the most sense. You can use a combo of the maximumCacheEntries and maximumEntryLifetime properties to accomplish this.
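For example, filling in the “cache settings” placeholder from step 1 (one hour is 3,600,000 ms; the numbers here are hypothetical and should be adapted to your data’s volatility):

```
# XYZCache.properties — hypothetical tuning values
maximumCacheEntries=1000
maximumEntryLifetime=3600000
```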
Keep in mind that the out-of-the-box ATG Cache is an LRU (least recently used) cache, meaning it will fill up to its capacity and then start discarding the least-recently-used entries to make room for newer ones.
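The eviction behavior is easy to see with a toy LRU cache built on Java’s LinkedHashMap (this is not ATG code, just an illustration of the policy):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal LRU cache: accessOrder=true reorders entries on each get(),
// so the "eldest" entry is always the least-recently-used one.
public class LruDemo {
    public static <K, V> Map<K, V> lruCache(final int maxEntries) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries; // evict once capacity is exceeded
            }
        };
    }

    public static void main(String[] args) {
        Map<String, String> cache = lruCache(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");          // touch "a" so "b" becomes least recently used
        cache.put("c", "3");     // capacity exceeded: "b" is evicted
        System.out.println(cache.keySet()); // [a, c]
    }
}
```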
Writing a custom cache can really help an ATG application’s performance. As far as I’ve found, there doesn’t seem to be any official documentation available for writing your own implementation. That’s unfortunate, because many people could benefit if it had a spot in the Commerce Programming Guide or something similar. As such, most of the information here comes from the javadocs and my professional experience writing these caches.