Creating custom, programmatically drawn components is a common task in iOS design and development. However, the path toward optimizing these components becomes complicated very quickly, and there is a lot of conflicting advice out there. For the most part, people end up applying the CALayer shouldRasterize property as a cure-all for performance issues. However, Core Animation rasterization is only so clever: it flushes the rasterized bitmap every time an instance is redrawn. Oftentimes, bundled assets are used instead. I think there is a lot of value in building dynamic components rather than filling your app binary with bundled assets, so I decided to dig deeper and develop a more efficient pattern.
I was inspired by two recently published works: Reda Lemeden’s article Designing for iOS: Graphics and Performance, and Javier Soto’s MSCachedAsyncViewDrawing library.
The main point of inspiration from Reda’s article was what he called a ‘hybrid approach’ - essentially, using the iOS drawing stack to render image assets on the component level as static UIImages. At its core, this is very similar to rasterization, except that the generated image is drawn only once and stored in a shared location, instead of Core Animation storing a bitmap of each rendered instance of your component. This works great for components that are static in nature, but what about more complex components that have many properties affecting their appearance?
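The hybrid approach boils down to something like the sketch below: render the view hierarchy into an image context once, then reuse the resulting UIImage everywhere. The method name here is illustrative, not part of any library.

```objc
// Sketch of the 'hybrid approach': render a view once into a static UIImage.
// (Illustrative helper; not part of HTStateAwareRasterImageView's API.)
- (UIImage *)imageForView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, [UIScreen mainScreen].scale);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
```

The returned image can then be assigned to any number of UIImageView instances, so the expensive drawing happens once instead of per instance.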
This question drove the development of a class called HTStateAwareRasterImageView, a rasterization system that caches rendered components based on their state, rendering new assets in response to key-value observation of developer-defined key paths affecting appearance. In this system, a component’s unique state, based on the values of its appearance-affecting properties, is only ever drawn once. Asynchronous rendering is an added bonus, thanks to MSCachedAsyncViewDrawing. The bundled demo project compares the same component in a table using no rasterization, Core Animation rasterization, and HTStateAwareRasterImageView, demonstrating a dramatic performance improvement, especially on older devices (I tested on a 4th-generation iPod touch). The usage looks like this, from the GitHub README:
Start by conforming to the HTRasterizableView protocol. A simple example is provided in the demo project (HTExampleRasterizableComponent). The single required method is:
- (NSArray *)keyPathsThatAffectState;
This is used for two purposes:
- To key-value observe the specified key paths to trigger image regeneration
- To generate a hash of the component’s state
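As a hedged illustration, a conforming component might look like this (the class and property names are hypothetical; the demo project’s HTExampleRasterizableComponent is the canonical example):

```objc
// Hypothetical component: a badge whose appearance depends on two properties.
@interface MYBadgeView : UIView <HTRasterizableView>
@property (nonatomic, strong) UIColor *fillColor;
@property (nonatomic, assign, getter=isHighlighted) BOOL highlighted;
@end

@implementation MYBadgeView

- (NSArray *)keyPathsThatAffectState
{
    // A change to either property triggers image regeneration,
    // and both values contribute to the cache key.
    return @[@"fillColor", @"highlighted"];
}

@end
```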
Initialize an HTStateAwareRasterImageView and set the rasterizableView property to your HTRasterizableView, like this snippet from the demo project:
_rasterizableComponent = [[HTExampleRasterizableComponent alloc] init];
_stateAwareRasterImageView = [[HTStateAwareRasterImageView alloc] init];
_stateAwareRasterImageView.rasterizableView = _rasterizableComponent;
_stateAwareRasterImageView.delegate = self;
[self addSubview:_stateAwareRasterImageView];
If your component can take advantage of UIImage caps (fixed-size corners and stretchable center), these two methods are optional on the HTRasterizableView protocol:
- (UIEdgeInsets)capEdgeInsets;
- (BOOL)useMinimumFrameForCaps;
If you return YES from useMinimumFrameForCaps, the component is rendered at its cumulative cap sizes plus 1pt horizontally and vertically, drastically reducing render time in many applications.
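For example, a rounded-rect component with fixed 8pt corners might implement the two optional methods like this sketch (the inset values are illustrative):

```objc
// Sketch: fixed 8pt corner caps with a stretchable center. With
// useMinimumFrameForCaps returning YES, only a minimal image
// (cumulative caps + 1pt in each dimension) is rendered, then stretched.
- (UIEdgeInsets)capEdgeInsets
{
    return UIEdgeInsetsMake(8.0, 8.0, 8.0, 8.0);
}

- (BOOL)useMinimumFrameForCaps
{
    return YES;
}
```

This mirrors how UIImage resizableImageWithCapInsets: works: the corners stay fixed while the 1pt center region is tiled or stretched to fill any frame.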
You can specify if you want drawing to occur synchronously on the main thread:
@property (nonatomic, assign) BOOL drawsOnMainThread;
You can also turn off key path observing if you want to manually regenerate images (use this for pre-rendering assets):
@property (nonatomic, assign) BOOL kvoEnabled;

// For prerendering only
- (void)regenerateImage:(HTSARIVVoidBlock)complete;
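Putting those two together, pre-rendering might look like this sketch (assuming HTSARIVVoidBlock is an argument-less block, as its name suggests, and using the _stateAwareRasterImageView instance configured earlier):

```objc
// Sketch: disable KVO-driven regeneration and pre-render manually,
// e.g. before the component scrolls on screen.
_stateAwareRasterImageView.kvoEnabled = NO;
[_stateAwareRasterImageView regenerateImage:^{
    // The rendered image is now in the shared cache,
    // so the first on-screen appearance pays no drawing cost.
    NSLog(@"Pre-rendered image ready");
}];
```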
A delegate property is also available to let you know when it’s regenerating an image, and when it gets a new image back:
@property (atomic, assign) id<HTStateAwareRasterImageViewDelegate> delegate;
For debugging purposes, the cache key is available through this method:
- (NSString *)cacheKey;
The demo project has four tabs:
- A tableview taking advantage of HTStateAwareRasterImageView
- A tableview that displays cache key, actual size and cell-height sized cached images
- A tableview that uses the same component without rasterization
- A tableview that uses the same component with Core Animation rasterization enabled
The cache key used to define the state of your component is generated by the NSObject+HTPropertyHash category. It is important that the hash method produces a string that is unique to the state of your properties, but not TOO unique - it must not include things like pointer values. The exception for CGColorRef in that category exists because we only want the RGBA values described, not the pointer value plus the RGBA values. Other exceptions may be required, depending on the application.
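Conceptually, a state-based cache key can be built by walking the observed key paths with KVC, as in this sketch (the real implementation lives in NSObject+HTPropertyHash and may differ in detail):

```objc
// Sketch: build a cache key from a component's class and the values
// at its state-affecting key paths. Note the pitfall the category
// guards against: a value whose -description embeds a pointer
// (e.g. a boxed CGColorRef) would make every instance's key unique,
// defeating the cache.
- (NSString *)cacheKeyForObject:(NSObject *)object keyPaths:(NSArray *)keyPaths
{
    NSMutableString *key = [NSMutableString stringWithString:NSStringFromClass([object class])];
    for (NSString *keyPath in keyPaths) {
        [key appendFormat:@";%@=%@", keyPath, [object valueForKeyPath:keyPath]];
    }
    return key;
}
```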
A limitation of this approach (and of all approaches that render CALayers to image contexts) is that CALayer renderInContext:, used for drawing the layer into a graphics context, does not support the CALayer mask property, among other things. The category included in the project, UIView+HTDrawInContext, provides a workaround for CALayer masks, but only for the root layer of the rasterized component. From the CALayer documentation:
… this method does not support the entire Core Animation composition model.
QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of OS X may add support for rendering these layers and properties.
If you decide to give HTStateAwareRasterImageView a try, pull requests and feedback are very welcome (submit an issue on GitHub). Good luck!
Interested in building something great?
Join us in building the world’s most loved hotel app.
View our open engineering positions.