No such function exists in the iOS frameworks, and there are two big concepts needed to understand why that is, and what to do instead.
The first concept is asynchronous event-driven programming. In this software paradigm, an app doesn't tell the OS what to do (such as draw something now). Instead, your app declares a method and requests that the OS call that method back at some later time. The OS then calls that method back when it is good and ready, which is very often later, not now.
So, on iOS, your app doesn't just draw when it wants to draw. Your app instead has to politely request that the OS call a particular method, such as drawRect, by sending a setNeedsDisplay request to the desired view or subview. The OS usually complies with this request later, but only after your app has exited the code it is currently running. And the OS won't comply with your request more often than the frame rate (usually 60 Hz). Only at those times when the OS chooses to call your drawRect does a view's graphics context actually exist for any drawing.
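As an example, here is a hedged sketch of what that request-then-callback pattern looks like in a UIView subclass; the touch handler is just one plausible place to make the request, not something from the original:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // Don't draw here; just politely ask the OS to redraw this view later.
    // The OS will call this view's drawRect on its own schedule,
    // no more often than the display frame rate.
    [self setNeedsDisplay];
}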
The second concept is the need to be able to redraw everything in a view, even if you only want to update one dot on top of something drawn a few seconds ago. You can't just draw a dot or line and expect it to show up on top of much earlier content, content that was drawn beforehand in another callback. Even though you can see it on the display, nothing drawn a frame beforehand was actually saved for reuse as far as an iOS app is concerned.
In the old days of computing, when computers were the size of refrigerators or larger, there were no glass display screens. All human-readable output was via teletypes or line printers, very often in another room from the computer. Your computer output was printed on paper, and then fed out the top. If you needed to add a word, you didn't try to get the computer to somehow modify an already-printed sheet of paper with white-out or something; you just reprinted the entire page. Or maybe you could stick a Post-it note on top of the earlier printed page. On early personal computers, by contrast, a program might be able to change bits on a bitmapped memory display at any time. That doesn't work under iOS.
iOS view drawing is much closer to the older page-printing model. The display's bitmap isn't in the same memory as the app. Your app sends a UIView's graphics contents off to the GPU somewhere else on the chip. If you want to change anything, you need to send the entire view to the graphics logic again. There's no way to directly add a pixel or line to an existing view that has already been displayed. This is because an iOS device's display is connected via a really complicated GPU on another part of the chip, and this connection is pretty much opaque, i.e. write-only, just as if it were in another room at the end of an output-only printer cable.
So how do you draw the usual procedural-code way (e.g. add some dots and lines now, and some more dots and lines to that same view later)? By having your app allocate its own bitmap memory and context, and drawing into that context using Core Graphics. You can then send that entire bitmap to update a view as needed.
First, we need some instance variables in your view's interface declaration. (These could be global variables for a really tiny app where the code is not meant for reuse or code review.)
unsigned char *myBitmap;
CGContextRef myDrawingContext;
CGRect myBitmapRect;
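For reference, here is a minimal sketch of where those declarations might go; the class name MyDrawingView is an illustrative assumption, not from the original:
@interface MyDrawingView : UIView
{
    unsigned char *myBitmap;        // backing memory for the bitmap context
    CGContextRef myDrawingContext;  // Core Graphics context that draws into myBitmap
    CGRect myBitmapRect;            // rect used when copying the bitmap into the view
}
@end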
Note that if you need more than one drawing context, each drawing context will require its own bitmap.
Here's how to create your own bitmap drawing context:
- (void)makeMyBitMapContextWithRect:(CGRect)r {
    int h = r.size.height;
    int w = r.size.width;
    int bitsPerComponent = 8;        // bits per color component, not per pixel
    int rowBytes = 4 * w;            // 4 bytes per pixel for 32-bit ARGB format
    int myBitmapSize = rowBytes * h; // memory needed
    myBitmap = (unsigned char *)malloc(myBitmapSize);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    if (myBitmap != NULL) {
        // clear bitmap to white
        memset(myBitmap, 0xff, myBitmapSize);
        myDrawingContext = CGBitmapContextCreate(myBitmap,
                                                 w, h, bitsPerComponent, rowBytes,
                                                 colorspace,
                                                 kCGImageAlphaPremultipliedFirst);
        myBitmapRect = r;
    }
    CGColorSpaceRelease(colorspace);
}
Here's how to draw a dot in your bitmap context:
- (void)drawRedDotInMyContextAtPoint:(CGPoint)pt {
    float x = pt.x;
    float y = pt.y;
    float dotSize = 1.0;
    CGRect r1 = CGRectMake(x, y, dotSize, dotSize);
    CGContextSetRGBFillColor(myDrawingContext, 1.0, 0.0, 0.0, 1.0);
    // draw a red dot in this context
    CGContextFillRect(myDrawingContext, r1);
}
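Since lines are mentioned alongside dots above, here is a similar hedged sketch for stroking a line segment into the same bitmap context; the method name and color are illustrative assumptions, not from the original:
- (void)drawBlueLineInMyContextFrom:(CGPoint)p1 to:(CGPoint)p2 {
    CGContextSetRGBStrokeColor(myDrawingContext, 0.0, 0.0, 1.0, 1.0);
    CGContextSetLineWidth(myDrawingContext, 1.0);
    // stroke a line segment from p1 to p2 in this context
    CGContextMoveToPoint(myDrawingContext, p1.x, p1.y);
    CGContextAddLineToPoint(myDrawingContext, p2.x, p2.y);
    CGContextStrokePath(myDrawingContext);
}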
But whatever drawing you do in your bitmap context will initially be invisible. You still have to get the bitmap onto the iOS device's display. There are two ways to do this. Here's how to draw your bitmap context into a view during the view's drawRect:
// call this from drawRect with the drawRect's current context
- (void)drawMyBitmap:(CGContextRef)context {
    CGImageRef myImage = CGBitmapContextCreateImage(myDrawingContext);
    CGContextDrawImage(context, myBitmapRect, myImage);
    CGImageRelease(myImage);
}
Call setNeedsDisplay on your view (which needs to be an instance of your UIView subclass), and the OS will call the view's drawRect, which can then call the above update method.
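A minimal sketch of what that drawRect might look like in your UIView subclass, assuming the methods above (UIKit supplies a valid graphics context while drawRect is running):
- (void)drawRect:(CGRect)rect {
    // UIKit has set up a valid drawing context by the time this is called
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self drawMyBitmap:context];
}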
If your bitmap drawing context is the exact same size as your view, here's how to update an entire UIView outside of a drawRect:
- (void)useMyBitmapContextAsViewLayer {
    CGImageRef myImage = CGBitmapContextCreateImage(myDrawingContext);
    // use the following only inside a UIView's implementation
    UIView *myView = self;
    CALayer *myLayer = [myView layer];
    [myLayer setContents:(id)myImage];
    CGImageRelease(myImage);
    // notify OS that a drawing layer has changed
    // [CATransaction flush];
}
You only have to do a CATransaction flush if you are not calling setNeedsDisplay on your view, for instance, if you are doing drawing in a background thread.
Note that a view update isn't needed after every single item drawn. Since the actual hardware display never changes more often than the frame rate, which is no more than 60 Hz, a view update is needed no more than once per frame. So don't call setNeedsDisplay, or the CALayer update method, more than once every 1/60th of a second. Perhaps use a timer method in an animation loop at a known frame rate to call it, and only if the bitmap is dirty.
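One possible way to set up such a loop (an assumption on my part, not the only option) is a CADisplayLink, which fires once per display frame; the myBitmapIsDirty flag here is a hypothetical BOOL instance variable that your drawing code would set:
- (void)startMyRefreshLoop {
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(myFrameStep:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
}

- (void)myFrameStep:(CADisplayLink *)link {
    // redraw at most once per frame, and only if the bitmap has changed
    if (myBitmapIsDirty) {          // hypothetical dirty flag, not in the original
        myBitmapIsDirty = NO;
        [self setNeedsDisplay];     // or call useMyBitmapContextAsViewLayer
    }
}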
Added (2012May22):
So why isn't this the regular paradigm to do drawing on iOS devices? What's the disadvantage?
There are at least three big disadvantages to this method. First, the bitmap can be huge and use up a lot of memory, which is in limited supply to apps on iOS devices. Second, you may need to adapt your bitmap size to the resolution and scale of the device; if you use an iPhone-sized bitmap on an iPad, or a regular-scale bitmap on a Retina device, the resulting image may look blurry or pixelated instead of sharp. Lastly, this method can be very slow, resulting in a low frame rate or poor app performance, because a huge bitmap has to be reformatted and copied to the GPU every time your app needs to update the display, which can take far longer than just sending a few line or point coordinates if your drawing isn't too complicated. Apple has optimized its recommended graphics flow (drawing in a view's drawRect) for lower memory usage, better device independence, and good UI responsiveness, all of which benefit iOS device users, even if that makes things more difficult for an iOS app developer.