Event Handling Guide for iOS
Contents
Gesture Recognizers 10
Use Gesture Recognizers to Simplify Event Handling 10
Built-in Gesture Recognizers Recognize Common Gestures 11
Gesture Recognizers Are Attached to a View 11
Gestures Trigger Action Messages 11
Responding to Events with Gesture Recognizers 12
Using Interface Builder to Add a Gesture Recognizer to Your App 13
Adding a Gesture Recognizer Programmatically 13
Responding to Discrete Gestures 14
Responding to Continuous Gestures 16
Defining How Gesture Recognizers Interact 17
Gesture Recognizers Operate in a Finite State Machine 17
Interacting with Other Gesture Recognizers 19
Interacting with Other User Interface Controls 22
Gesture Recognizers Interpret Raw Touch Events 23
An Event Contains All the Touches for the Current Multitouch Sequence 23
An App Receives Touches in the Touch-Handling Methods 24
Regulating the Delivery of Touches to Views 25
Gesture Recognizers Get the First Opportunity to Recognize a Touch 25
Affecting the Delivery of Touches to Views 26
Creating a Custom Gesture Recognizer 27
Implementing the Touch-Event Handling Methods for a Custom Gesture Recognizer 28
Resetting a Gesture Recognizer’s State 30
Multitouch Events 37
Creating a Subclass of UIResponder 37
Implementing the Touch-Event Handling Methods in Your Subclass 38
Tracking the Phase and Location of a Touch Event 39
Retrieving and Querying Touch Objects 39
Handling Tap Gestures 42
Handling Swipe and Drag Gestures 42
Handling a Complex Multitouch Sequence 45
Specifying Custom Touch Event Behavior 49
Intercepting Touches by Overriding Hit-Testing 51
Forwarding Touch Events 51
Best Practices for Handling Multitouch Events 53
Motion Events 55
Getting the Current Device Orientation with UIDevice 55
Detecting Shake-Motion Events with UIEvent 57
Designating a First Responder for Motion Events 57
Implementing the Motion-Handling Methods 57
Setting and Checking Required Hardware Capabilities for Motion Events 58
Capturing Device Movement with Core Motion 59
Choosing a Motion Event Update Interval 60
Handling Accelerometer Events Using Core Motion 61
Handling Rotation Rate Data 63
Handling Processed Device Motion Data 65
Figures, Tables, and Listings
Gesture Recognizers 10
Figure 1-1 A gesture recognizer attached to a view 10
Figure 1-2 Discrete and continuous gestures 12
Figure 1-3 State machines for gesture recognizers 18
Figure 1-4 A multitouch sequence and touch phases 24
Figure 1-5 Default delivery path for touch events 25
Figure 1-6 Sequence of messages for touches 26
Table 1-1 Gesture recognizer classes of the UIKit framework 11
Listing 1-1 Adding a gesture recognizer to your app with Interface Builder 13
Listing 1-2 Creating a single tap gesture recognizer programmatically 13
Listing 1-3 Handling a double tap gesture 14
Listing 1-4 Responding to a left or right swipe gesture 15
Listing 1-5 Responding to a rotation gesture 16
Listing 1-6 Pan gesture recognizer requires a swipe gesture recognizer to fail 20
Listing 1-7 Preventing a gesture recognizer from receiving a touch 21
Listing 1-8 Implementation of a checkmark gesture recognizer 28
Listing 1-9 Resetting a gesture recognizer 30
Multitouch Events 37
Figure 3-1 Relationship of a UIEvent object and its UITouch objects 39
Figure 3-2 All touches for a given touch event 40
Figure 3-3 All touches belonging to a specific window 41
Figure 3-4 All touches belonging to a specific view 41
Figure 3-5 Restricting event delivery with an exclusive-touch view 50
Listing 3-1 Detecting a double tap gesture 42
Listing 3-2 Tracking a swipe gesture in a view 43
Listing 3-3 Dragging a view using a single touch 44
Listing 3-4 Storing the beginning locations of multiple touches 46
Listing 3-5 Retrieving the initial locations of touch objects 46
Listing 3-6 Handling a complex multitouch sequence 47
Listing 3-7 Determining when the last touch in a multitouch sequence has ended 49
Listing 3-8 Forwarding touch events to helper responder objects 52
Motion Events 55
Figure 4-1 The accelerometer measures velocity along the x, y, and z axes 61
Figure 4-2 The gyroscope measures rotation around the x, y, and z axes 63
Table 4-1 Common update intervals for acceleration events 60
Listing 4-1 Responding to changes in device orientation 56
Listing 4-2 Becoming first responder 57
Listing 4-3 Handling a motion event 58
Listing 4-4 Accessing accelerometer data in MotionGraphs 62
Listing 4-5 Accessing gyroscope data in MotionGraphs 64
Listing 4-6 Starting and stopping device motion updates 67
Listing 4-7 Getting the change in attitude prior to rendering 68
About Events in iOS
Users manipulate their iOS devices in a number of ways, such as touching the screen or shaking the device.
iOS interprets when and how a user is manipulating the hardware and passes this information to your app.
The more your app responds to actions in natural and intuitive ways, the more compelling the experience is
for the user.
At a Glance
Events are objects sent to an app to inform it of user actions. In iOS, events can take many forms: Multi-Touch
events, motion events, and events for controlling multimedia. This last type of event is known as a remote
control event because it can originate from an external accessory.
Gesture recognizers provide a higher-level abstraction for complex event handling logic. Gesture recognizers
are the preferred way to implement touch-event handling in your app because gesture recognizers are powerful,
reusable, and adaptable. You can use one of the built-in gesture recognizers and customize its behavior. Or
you can create your own gesture recognizer to recognize a new gesture.
Relevant Chapters: “Multitouch Events” (page 37), “Motion Events” (page 55), and “Remote Control
Events” (page 69)
Motion events come in different forms, and you can handle them using different frameworks. When users
shake the device, UIKit delivers a UIEvent object to an app. If you want your app to receive high-rate, continuous
accelerometer and gyroscope data, use the Core Motion framework.
Prerequisites
This document assumes that you are familiar with:
● The basic concepts of iOS app development
● The different aspects of creating your app’s user interface
● How views and view controllers work, and how to customize them
If you are not familiar with those concepts, start by reading Start Developing iOS Apps Today. Then, be sure to
read either View Programming Guide for iOS or View Controller Programming Guide for iOS, or both.
See Also
In the same way that iOS devices provide touch and device motion data, most iOS devices have GPS and
compass hardware that generates low-level data that your app might be interested in. Location and Maps
Programming Guide discusses how to receive and handle location data.
For advanced gesture recognizer techniques such as curve smoothing and applying a low-pass filter, see WWDC
2012: Building Advanced Gesture Recognizers.
Many sample code projects in the iOS Reference Library have code that uses gesture recognizers and handles
events. Among these are the following projects:
● Simple Gesture Recognizers is a perfect starting point for understanding gesture recognition. This app
demonstrates how to recognize tap, swipe, and rotate gestures. The app responds to each gesture by
displaying and animating an image at the touch location.
● Handling Touches Using Responder Methods and Gesture Recognizers includes two projects that demonstrate
how to handle multiple touches to drag squares around onscreen. One version uses gesture recognizers,
and the other uses custom touch-event handling methods. The latter version is especially useful for
understanding touch phases because it displays the current touch phase onscreen as the touches occur.
● MoveMe shows how to animate a view in response to touch events. Examine this sample project to further
your understanding of custom touch-event handling.
Gesture Recognizers
Gesture recognizers convert low-level event handling code into higher-level actions. They are objects that you
attach to a view, which allows the view to respond to actions the way a control does. Gesture recognizers
interpret touches to determine whether they correspond to a specific gesture, such as a swipe, pinch, or
rotation. If they recognize their assigned gesture, they send an action message to a target object. The target
object is typically the view’s view controller, which responds to the gesture as shown in Figure 1-1. This design
pattern is both powerful and simple; you can dynamically determine what actions a view responds to, and you
can add gesture recognizers to a view without having to subclass the view.
Figure 1-1 A gesture recognizer attached to a view
If you want your app to recognize a unique gesture, such as a checkmark or a swirly motion, you can create
your own custom gesture recognizer. To learn how to design and implement your own gesture recognizer,
see “Creating a Custom Gesture Recognizer” (page 27).
Use Gesture Recognizers to Simplify Event Handling
Your app should respond to gestures only in ways that users expect. For example, a pinch should zoom in and
out whereas a tap should select something. For guidelines about how to properly use gestures, see “Apps
Respond to Gestures, Not Clicks” in iOS Human Interface Guidelines.
Responding to Events with Gesture Recognizers
After you create the gesture recognizer object, you need to create and connect an action method. This method
is called whenever the connected gesture recognizer recognizes its gesture. If you need to reference the gesture
recognizer outside of this action method, you should also create and connect an outlet for the gesture recognizer.
Your code should look similar to Listing 1-1.
Listing 1-1 Adding a gesture recognizer to your app with Interface Builder
@interface APLGestureRecognizerViewController ()
@property (nonatomic, strong) IBOutlet UITapGestureRecognizer *tapRecognizer;
@end

@implementation APLGestureRecognizerViewController

- (IBAction)displayGestureForTapRecognizer:(UITapGestureRecognizer *)recognizer {
    // Respond to the recognized tap gesture here
}

@end
If you create a gesture recognizer programmatically, you need to attach it to a view using the
addGestureRecognizer: method. Listing 1-2 creates a single tap gesture recognizer, specifies that one tap
is required for the gesture to be recognized, and then attaches the gesture recognizer object to a view. Typically,
you create a gesture recognizer in your view controller’s viewDidLoad method, as shown in Listing 1-2.
Listing 1-2 Creating a single tap gesture recognizer programmatically
- (void)viewDidLoad {
    [super viewDidLoad];

    // Create and initialize a tap gesture recognizer
    UITapGestureRecognizer *tapRecognizer = [[UITapGestureRecognizer alloc]
        initWithTarget:self action:@selector(respondToTapGesture:)];

    // Specify that the gesture must be a single tap
    tapRecognizer.numberOfTapsRequired = 1;

    // Add the tap gesture recognizer to the view
    [self.view addGestureRecognizer:tapRecognizer];

    // Do any additional setup after loading the view, typically from a nib
}
Note: The next three code examples are from the Simple Gesture Recognizers sample code project,
which you can examine for more context.
- (IBAction)showGestureForTapRecognizer:(UITapGestureRecognizer *)recognizer {
    // Animate the image view so that it fades out
    [UIView animateWithDuration:0.5 animations:^{
        self.imageView.alpha = 0.0;
    }];
}
Each gesture recognizer has its own set of properties. For example, in Listing 1-4, the
showGestureForSwipeRecognizer: method uses the swipe gesture recognizer’s direction property to
determine if the user swiped to the left or to the right. Then, it uses that value to make an image fade out in
the same direction as the swipe.
Listing 1-4 Responding to a left or right swipe gesture
- (IBAction)showGestureForSwipeRecognizer:(UISwipeGestureRecognizer *)recognizer
{
    // Get the location of the gesture
    CGPoint location = [recognizer locationInView:self.view];

    // If the swipe was to the left, move the end location to the left; otherwise move it to the right
    if (recognizer.direction == UISwipeGestureRecognizerDirectionLeft) {
        location.x -= 220.0;
    } else {
        location.x += 220.0;
    }

    // Animate the image view in the direction of the swipe as it fades out
    [UIView animateWithDuration:0.5 animations:^{
        self.imageView.alpha = 0.0;
        self.imageView.center = location;
    }];
}
Listing 1-5 displays a “Rotate” image at the same rotation angle as the gesture, and when the user stops rotating,
animates the image so it fades out in place while rotating back to horizontal. As the user rotates his fingers,
the showGestureForRotationRecognizer: method is called continually until both fingers are lifted.
Listing 1-5 Responding to a rotation gesture
- (IBAction)showGestureForRotationRecognizer:(UIRotationGestureRecognizer *)recognizer {
    // Rotate the image view to match the rotation of the gesture
    CGAffineTransform transform = CGAffineTransformMakeRotation(recognizer.rotation);
    self.imageView.transform = transform;

    // When the gesture ends, fade the image out while rotating it back to horizontal
    if (recognizer.state == UIGestureRecognizerStateEnded) {
        [UIView animateWithDuration:0.5 animations:^{
            self.imageView.alpha = 0.0;
            self.imageView.transform = CGAffineTransformIdentity;
        }];
    }
}
Each time the method is called, the image is set to be opaque in the drawImageForGestureRecognizer:
method. When the gesture is complete, the image is set to be transparent in the animateWithDuration:
method. The showGestureForRotationRecognizer: method determines whether a gesture is complete
by checking the gesture recognizer’s state. These states are explained in more detail in “Gesture Recognizers
Operate in a Finite State Machine” (page 17).
Defining How Gesture Recognizers Interact
Gesture recognizers analyze any multitouch sequences that they receive, and during analysis they either
recognize or fail to recognize a gesture. Failing to recognize a gesture means the gesture recognizer transitions
to the Failed state (UIGestureRecognizerStateFailed).
When a discrete gesture recognizer recognizes its gesture, the gesture recognizer transitions from Possible to
Recognized (UIGestureRecognizerStateRecognized) and the recognition is complete.
For continuous gestures, the gesture recognizer transitions from Possible to Began
(UIGestureRecognizerStateBegan) when the gesture is first recognized. Then, it transitions from Began
to Changed (UIGestureRecognizerStateChanged), and continues to move from Changed to Changed as
the gesture occurs. When the user’s last finger is lifted from the view, the gesture recognizer transitions to the
Ended state (UIGestureRecognizerStateEnded) and the recognition is complete. Note that the Ended
state is an alias for the Recognized state.
A recognizer for a continuous gesture can also transition from Changed to Canceled
(UIGestureRecognizerStateCancelled) if it decides that the gesture no longer fits the expected pattern.
Every time a gesture recognizer changes state, the gesture recognizer sends an action message to its target,
unless it transitions to Failed or Canceled. Thus, a discrete gesture recognizer sends only a single action message
when it transitions from Possible to Recognized. A continuous gesture recognizer sends many action messages
as it changes states.
When a gesture recognizer reaches the Recognized (or Ended) state, it resets its state back to Possible. The
transition back to Possible does not trigger an action message.
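To see how these states look from the action method of a continuous gesture recognizer, here is a minimal sketch of a pan handler; the handlePan: name and the assumption that the recognizer is attached to a movable view are illustrative, not part of the guide’s sample code.
- (IBAction)handlePan:(UIPanGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan ||
        recognizer.state == UIGestureRecognizerStateChanged) {
        // Sent repeatedly while the gesture is in progress
        CGPoint translation = [recognizer translationInView:recognizer.view.superview];
        recognizer.view.center = CGPointMake(recognizer.view.center.x + translation.x,
                                             recognizer.view.center.y + translation.y);
        [recognizer setTranslation:CGPointZero inView:recognizer.view.superview];
    } else if (recognizer.state == UIGestureRecognizerStateEnded) {
        // The last finger lifted; this is the final action message before the
        // recognizer resets itself to the Possible state
    }
}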
When a view has multiple gesture recognizers attached to it, you may want to alter how the competing gesture
recognizers receive and analyze touch events. By default, there is no set order for which gesture recognizers
receive a touch first, and for this reason touches can be passed to gesture recognizers in a different order each
time. You can override this default behavior to:
● Specify that one gesture recognizer should analyze a touch before another gesture recognizer.
● Allow two gesture recognizers to operate simultaneously.
● Prevent a gesture recognizer from analyzing a touch.
Use the UIGestureRecognizer class methods, delegate methods, and methods overridden by subclasses
to effect these behaviors.
For your view to recognize both swipes and pans, you want the swipe gesture recognizer to analyze the touch
event before the pan gesture recognizer does. If the swipe gesture recognizer determines that a touch is a
swipe, the pan gesture recognizer never needs to analyze the touch. If the swipe gesture recognizer determines
that the touch is not a swipe, it moves to the Failed state and the pan gesture recognizer should begin analyzing
the touch event.
You indicate this type of relationship between two gesture recognizers by calling the
requireGestureRecognizerToFail: method on the gesture recognizer that you want to delay, as in
Listing 1-6. In this listing, both gesture recognizers are attached to the same view.
Listing 1-6 Pan gesture recognizer requires a swipe gesture recognizer to fail
- (void)viewDidLoad {
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib
[self.panRecognizer requireGestureRecognizerToFail:self.swipeRecognizer];
}
The requireGestureRecognizerToFail: method sends a message to the receiver and specifies some
otherGestureRecognizer that must fail before the receiving recognizer can begin. While it’s waiting for the other
gesture recognizer to transition to the Failed state, the receiving recognizer stays in the Possible state. If the
other gesture recognizer fails, the receiving recognizer analyzes the touch event and moves to its next state.
On the other hand, if the other gesture recognizer transitions to Recognized or Began, the receiving recognizer
moves to the Failed state. For information about state transitions, see “Gesture Recognizers Operate in a Finite
State Machine” (page 17).
Note: If your app recognizes both single and double taps and your single tap gesture recognizer
does not require the double tap recognizer to fail, then you should expect to receive single tap
actions before double tap actions, even when the user double taps. This behavior is intentional
because the best user experience generally enables multiple types of actions.
If you want these two actions to be mutually exclusive, your single tap recognizer must require the
double tap recognizer to fail. However, your single tap actions will lag a little behind the user’s input
because the single tap recognizer is delayed until the double tap recognizer fails.
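A minimal sketch of setting up this failure requirement follows; the handleSingleTap: and handleDoubleTap: action methods are assumed to exist elsewhere in the view controller.
UITapGestureRecognizer *singleTap = [[UITapGestureRecognizer alloc]
    initWithTarget:self action:@selector(handleSingleTap:)];
singleTap.numberOfTapsRequired = 1;

UITapGestureRecognizer *doubleTap = [[UITapGestureRecognizer alloc]
    initWithTarget:self action:@selector(handleDoubleTap:)];
doubleTap.numberOfTapsRequired = 2;

// Deliver single-tap actions only after the double tap recognizer has failed
[singleTap requireGestureRecognizerToFail:doubleTap];

[self.view addGestureRecognizer:singleTap];
[self.view addGestureRecognizer:doubleTap];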
When a touch begins, if you can immediately determine whether or not your gesture recognizer should consider
that touch, use the gestureRecognizer:shouldReceiveTouch: method. This method is called every time
there is a new touch. Returning NO prevents the gesture recognizer from being notified that a touch occurred.
The default value is YES. This method does not alter the state of the gesture recognizer.
Listing 1-7 uses the gestureRecognizer:shouldReceiveTouch: delegate method to prevent a tap gesture
recognizer from receiving touches that are within a custom subview. When a touch occurs, the
gestureRecognizer:shouldReceiveTouch: method is called. It determines whether the user touched
the custom view, and if so, prevents the tap gesture recognizer from receiving the touch event.
- (void)viewDidLoad {
[super viewDidLoad];
self.tapGestureRecognizer.delegate = self;
-(BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
shouldReceiveTouch:(UITouch *)touch {
return NO;
return YES;
If you need to wait as long as possible before deciding whether or not a gesture recognizer should analyze a
touch, use the gestureRecognizerShouldBegin: delegate method. Generally, you use this method if you
have a UIView or UIControl subclass with custom touch-event handling that competes with a gesture
recognizer. Returning NO causes the gesture recognizer to immediately fail, which allows the other touch
handling to proceed. This method is called when a gesture recognizer attempts to transition out of the Possible
state, if the gesture recognition would prevent a view or control from receiving a touch.
You can use the gestureRecognizerShouldBegin: method of UIView if your view or view controller cannot
be the gesture recognizer’s delegate. The method signature and implementation are the same.
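For instance, a control subclass with its own touch handling might implement the UIView version of this method roughly as follows; the choice to block only pan recognizers is an assumption made for illustration.
- (BOOL)gestureRecognizerShouldBegin:(UIGestureRecognizer *)gestureRecognizer {
    // Keep pans for this control's own touch handling; let other gestures proceed
    if ([gestureRecognizer isKindOfClass:[UIPanGestureRecognizer class]]) {
        return NO;
    }
    return YES;
}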
Note: You need to implement a delegate and return YES on only one of your gesture recognizers
to allow simultaneous recognition. However, that also means that returning NO doesn’t necessarily
prevent simultaneous recognition because the other gesture recognizer's delegate could return YES.
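A sketch of that delegate method might look like this, assuming the delegate belongs to a rotation gesture recognizer that you want to work alongside a pinch gesture recognizer:
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
    shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
    // Allow rotation and pinch to be recognized at the same time
    if ([otherGestureRecognizer isKindOfClass:[UIPinchGestureRecognizer class]]) {
        return YES;
    }
    return NO;
}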
To specify a one-way relationship between two gesture recognizers, you can override the
canPreventGestureRecognizer: or canBePreventedByGestureRecognizer: subclass method to return NO
(the default is YES). For example, if you want a rotation gesture to prevent a pinch gesture, but you don’t
want a pinch gesture to prevent a rotation gesture, you would specify:
[rotationGestureRecognizer canPreventGestureRecognizer:pinchGestureRecognizer];
and override the rotation gesture recognizer’s subclass method to return NO. For more information about how
to subclass UIGestureRecognizer, see “Creating a Custom Gesture Recognizer” (page 27).
If you have a custom subclass of one of these controls and you want to change the default action, attach a
gesture recognizer directly to the control instead of to the parent view. Then, the gesture recognizer receives
the touch event first. As always, be sure to read the iOS Human Interface Guidelines to ensure that your app
offers an intuitive user experience, especially when overriding the default behavior of a standard control.
An Event Contains All the Touches for the Current Multitouch Sequence
In iOS, a touch is the presence or movement of a finger on the screen. A gesture has one or more touches,
which are represented by UITouch objects. For example, a pinch-close gesture has two touches—two fingers
on the screen moving toward each other from opposite directions.
An event encompasses all touches that occur during a multitouch sequence. A multitouch sequence begins
when a finger touches the screen and ends when the last finger is lifted. As a finger moves, iOS sends touch
objects to the event. A multitouch event is represented by a UIEvent object of type UIEventTypeTouches.
Each touch object tracks only one finger and lasts only as long as the multitouch sequence. During the sequence,
UIKit tracks the finger and updates the attributes of the touch object. These attributes include the phase of
the touch, its location in a view, its previous location, and its timestamp.
The touch phase indicates when a touch begins, whether it is moving or stationary, and when it ends—that
is, when the finger is no longer touching the screen. As depicted in Figure 1-4, an app receives event objects
during each phase of any touch.
Note: A finger is less precise than a mouse pointer. When a user touches the screen, the area of
contact is actually elliptical and tends to be slightly lower than the user expects. This “contact patch”
varies based on the size and orientation of the finger, the amount of pressure, which finger is used,
and other factors. The underlying multitouch system analyzes this information for you and computes
a single touch point, so you don’t need to write your own code to do this.
Each of these methods is associated with a touch phase; for example, the touchesBegan:withEvent:
method is associated with UITouchPhaseBegan. The phase of a touch object is stored in its phase property.
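For reference, the UIResponder touch-handling methods referred to here have the following declarations:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;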
Note: These methods are not associated with gesture recognizer states, such as
UIGestureRecognizerStateBegan and UIGestureRecognizerStateEnded. Gesture recognizer
states strictly denote the phase of the gesture recognizer itself, not the phase of the touch objects
that are being recognized.
Regulating the Delivery of Touches to Views
For example, if you have a gesture recognizer for a discrete gesture that requires a two-fingered touch, this
translates to two separate touch objects. As the touches occur, the touch objects are passed from the app
object to the window object for the view where the touches occurred, and the following sequence occurs, as
depicted in Figure 1-6.
1. The window sends two touch objects in the Began phase—through the touchesBegan:withEvent:
method—to the gesture recognizer. The gesture recognizer doesn’t recognize the gesture yet, so its state
is Possible. The window sends these same touches to the view that the gesture recognizer is attached to.
2. The window sends two touch objects in the Moved phase—through the touchesMoved:withEvent:
method—to the gesture recognizer. The recognizer still doesn’t detect the gesture, and is still in state
Possible. The window then sends these touches to the attached view.
3. The window sends one touch object in the Ended phase—through the touchesEnded:withEvent:
method—to the gesture recognizer. This touch object doesn’t yield enough information for the gesture,
but the window withholds the object from the attached view.
4. The window sends the other touch object in the Ended phase. The gesture recognizer now recognizes its
gesture, so it sets its state to Recognized. Just before the first action message is sent, the view receives a
touchesCancelled:withEvent: message to invalidate the touch objects previously sent in the Began
and Moved phases. The touches in the Ended phase are canceled.
Now assume that the gesture recognizer in the last step decides that this multitouch sequence it’s been
analyzing is not its gesture. It sets its state to UIGestureRecognizerStateFailed. Then the window sends
the two touch objects in the Ended phase to the attached view in a touchesEnded:withEvent: message.
A gesture recognizer for a continuous gesture follows a similar sequence, except that it is more likely to
recognize its gesture before touch objects reach the Ended phase. Upon recognizing its gesture, it sets its state
to UIGestureRecognizerStateBegan (not Recognized). The window sends all subsequent touch objects
in the multitouch sequence to the gesture recognizer but not to the attached view.
● delaysTouchesBegan (default of NO)—Normally, the window sends touch objects in the Began and
Moved phases to the view and the gesture recognizer. Setting delaysTouchesBegan to YES prevents
the window from delivering touch objects in the Began phase to the view. This ensures that when a gesture
recognizer recognizes its gesture, no part of the touch event was delivered to the attached view. Be
cautious when setting this property because it can make your interface feel unresponsive.
This setting provides a similar behavior to the delaysContentTouches property on UIScrollView; in
this case, when scrolling begins soon after the touch begins, subviews of the scroll-view object never
receive the touch, so there is no flash of visual feedback.
● delaysTouchesEnded (default of YES)—When this property is set to YES, it ensures that a view does not
complete an action that the gesture might want to cancel later. When a gesture recognizer is analyzing a
touch event, the window does not deliver touch objects in the Ended phase to the attached view. If a
gesture recognizer recognizes its gesture, the touch objects are canceled. If the gesture recognizer does
not recognize its gesture, the window delivers these objects to the view through a
touchesEnded:withEvent: message. Setting this property to NO allows the view to analyze touch
objects in the Ended phase at the same time as the gesture recognizer.
Consider, for example, that a view has a tap gesture recognizer that requires two taps, and the user double
taps the view. With the property set to YES, the view gets touchesBegan:withEvent:,
touchesBegan:withEvent:, touchesCancelled:withEvent:, and touchesCancelled:withEvent:.
If this property is set to NO, the view gets the following sequence of messages:
touchesBegan:withEvent:, touchesEnded:withEvent:, touchesBegan:withEvent:, and
touchesCancelled:withEvent:, which means that in touchesBegan:withEvent:, the view can
recognize a double tap.
If a gesture recognizer detects a touch that it determines is not part of its gesture, it can pass the touch directly
to its view. To do this, the gesture recognizer calls ignoreTouch:forEvent: on itself, passing in the touch
object.
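For example, a custom recognizer (a UIGestureRecognizer subclass, which imports the subclass header described next) might ignore touches that begin outside a region it cares about; the top-half check below is purely an assumption for illustration.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesBegan:touches withEvent:event];
    for (UITouch *touch in touches) {
        // Ignore touches that begin in the lower half of the attached view
        if ([touch locationInView:self.view].y > CGRectGetMidY(self.view.bounds)) {
            [self ignoreTouch:touch forEvent:event];
        }
    }
}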
#import <UIKit/UIGestureRecognizerSubclass.h>
Next, copy the following method declarations from UIGestureRecognizerSubclass.h to your header file;
these are the methods you override in your subclass:
- (void)reset;
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;
These methods have the same exact signature and behavior as the corresponding touch-event handling
methods described earlier in “An App Receives Touches in the Touch-Handling Methods” (page 24). In all of
the methods you override, you must call the superclass implementation, even if the method has a null
implementation.
This example has only a single view, but most apps have many views. In general, you should convert touch
locations to the screen’s coordinate system so that you can correctly recognize gestures that span multiple
views.
Listing 1-8 Implementation of a checkmark gesture recognizer
#import <UIKit/UIGestureRecognizerSubclass.h>

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesBegan:touches withEvent:event];
    // A checkmark is drawn with a single finger
    if ([touches count] != 1) {
        self.state = UIGestureRecognizerStateFailed;
        return;
    }
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesMoved:touches withEvent:event];
    if (self.state == UIGestureRecognizerStateFailed) return;
    CGPoint nowPoint = [[touches anyObject] locationInView:self.view];
    CGPoint prevPoint = [[touches anyObject] previousLocationInView:self.view];

    // strokeUp is a property
    if (!self.strokeUp) {
        // On the downstroke, both x and y increase; on the upstroke, x increases but y decreases
        if (nowPoint.x >= prevPoint.x && nowPoint.y >= prevPoint.y) {
            self.midPoint = nowPoint;
        } else if (nowPoint.x >= prevPoint.x && nowPoint.y <= prevPoint.y) {
            self.strokeUp = YES;
        } else {
            self.state = UIGestureRecognizerStateFailed;
        }
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesEnded:touches withEvent:event];
    if ((self.state == UIGestureRecognizerStatePossible) && self.strokeUp) {
        self.state = UIGestureRecognizerStateRecognized;
    }
}
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesCancelled:touches withEvent:event];
    self.midPoint = CGPointZero;
    self.strokeUp = NO;
    self.state = UIGestureRecognizerStateFailed;
}
State transitions for discrete and continuous gestures are different, as described in “Gesture Recognizers Operate
in a Finite State Machine” (page 17). When you create a custom gesture recognizer, you indicate whether it
is discrete or continuous by assigning it the relevant states. As an example, the checkmark gesture recognizer
in Listing 1-8 never sets the state to Began or Changed, because it’s discrete.
The most important thing you need to do when subclassing a gesture recognizer is to set the gesture recognizer’s
state accurately. iOS needs to know the state of a gesture recognizer in order for gesture recognizers to
interact as expected. For example, if you want to permit simultaneous recognition or require a gesture recognizer
to fail, iOS needs to understand the current state of your recognizer.
For more about creating custom gesture recognizers, see WWDC 2012: Building Advanced Gesture Recognizers.
Implement the reset method to reset any internal state so that your recognizer is ready for a new attempt
at recognizing a gesture, as in Listing 1-9. After a gesture recognizer returns from this method, it receives no
further updates for touches that are in progress.
Listing 1-9 Resetting a gesture recognizer
- (void)reset {
    [super reset];
    self.midPoint = CGPointZero;
    self.strokeUp = NO;
}
Event Delivery: The Responder Chain
When you design your app, it’s likely that you want to respond to events dynamically. For example, a touch
can occur in many different objects onscreen, and you have to decide which object you want to respond to a
given event and understand how that object receives the event.
When a user-generated event occurs, UIKit creates an event object containing the information needed to
process the event. Then it places the event object in the active app’s event queue. For touch events, that object
is a set of touches packaged in a UIEvent object. For motion events, the event object varies depending on
which framework you use and what type of motion event you are interested in.
An event travels along a specific path until it is delivered to an object that can handle it. First, the singleton
UIApplication object takes an event from the top of the queue and dispatches it for handling. Typically, it
sends the event to the app’s key window object, which passes the event to an initial object for handling. The
initial object depends on the type of event.
● Touch events. For touch events, the window object first tries to deliver the event to the view where the
touch occurred. That view is known as the hit-test view. The process of finding the hit-test view is called
hit-testing, which is described in “Hit-Testing Returns the View Where a Touch Occurred” (page 31).
● Motion and remote control events. With these events, the window object sends the shaking-motion or
remote control event to the first responder for handling. The first responder is described in “The Responder
Chain Is Made Up of Responder Objects” (page 33).
The ultimate goal of these event paths is to find an object that can handle and respond to an event. Therefore,
UIKit first sends the event to the object that is best suited to handle the event. For touch events, that object
is the hit-test view, and for other events, that object is the first responder. The following sections explain in
more detail how the hit-test view and first responder objects are determined.
Hit-Testing Returns the View Where a Touch Occurred
To illustrate, suppose that the user touches view E in Figure 2-1. iOS finds the hit-test view by checking the
subviews in this order:
1. The touch is within the bounds of view A, so it checks subviews B and C.
2. The touch is not within the bounds of view B, but it’s within the bounds of view C, so it checks subviews
D and E.
3. The touch is not within the bounds of view D, but it’s within the bounds of view E.
View E is the lowest view in the view hierarchy that contains the touch, so it becomes the hit-test view.
The hitTest:withEvent: method returns the hit-test view for a given CGPoint and UIEvent. The
hitTest:withEvent: method begins by calling the pointInside:withEvent: method on itself. If the
point passed into hitTest:withEvent: is inside the bounds of the view, pointInside:withEvent:
returns YES. Then, the method recursively calls hitTest:withEvent: on every subview that returns YES.
If the point passed into hitTest:withEvent: is not inside the bounds of the view, the first call to the
pointInside:withEvent: method returns NO, the point is ignored, and hitTest:withEvent: returns
nil. If a subview returns NO, that whole branch of the view hierarchy is ignored, because if the touch did not
occur in that subview, it also did not occur in any of that subview’s subviews. This means that any point in a
subview that is outside of its superview can’t receive touch events because the touch point has to be within
the bounds of the superview and the subview. This can occur if the subview’s clipsToBounds property is
set to NO.
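The default behavior just described is roughly equivalent to the following sketch, which omits the checks for hidden, transparent, and interaction-disabled views:
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    if (![self pointInside:point withEvent:event]) {
        return nil;
    }
    // Check subviews from front to back
    for (UIView *subview in [self.subviews reverseObjectEnumerator]) {
        CGPoint convertedPoint = [subview convertPoint:point fromView:self];
        UIView *hitView = [subview hitTest:convertedPoint withEvent:event];
        if (hitView) {
            return hitView;
        }
    }
    // No subview contains the point, so this view is the hit-test view
    return self;
}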
Note: A touch object is associated with its hit-test view for its lifetime, even if the touch later moves
outside the view.
The hit-test view is given the first opportunity to handle a touch event. If the hit-test view cannot handle an
event, the event travels up that view’s chain of responders as described in “The Responder Chain Is Made Up
of Responder Objects” (page 33) until the system finds an object that can handle it.
The Responder Chain Is Made Up of Responder Objects
A responder object is an object that can respond to and handle events. The UIResponder class is the base
class for all responder objects, and it defines the programmatic interface not only for event handling but also
for common responder behavior. Instances of the UIApplication, UIViewController, and UIView classes
are responders, which means that all views and most key controller objects are responders. Note that Core
Animation layers are not responders.
The first responder is designated to receive events first. Typically, the first responder is a view object. An object
becomes the first responder by doing two things:
1. Overriding the canBecomeFirstResponder method to return YES.
2. Receiving a becomeFirstResponder message. If necessary, an object can send itself this message.
Note: Make sure that your app has established its object graph before assigning an object to be
the first responder. For example, you typically call the becomeFirstResponder method in an
override of the viewDidAppear: method. If you try to assign the first responder in
viewWillAppear:, your object graph is not yet established, so the becomeFirstResponder
method returns NO.
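Put together, a view controller (or view) that wants to receive events as the first responder might do something like this sketch:
- (BOOL)canBecomeFirstResponder {
    return YES;
}

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    // The object graph is established by now, so it's safe to become first responder
    [self becomeFirstResponder];
}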
Events are not the only objects that rely on the responder chain. The responder chain is used in all of the
following:
● Touch events. If the hit-test view cannot handle a touch event, the event is passed up a chain of responders
that starts with the hit-test view.
● Motion events. To handle shake-motion events with UIKit, the first responder must implement either the
motionBegan:withEvent: or motionEnded:withEvent: method of the UIResponder class, as
described in “Detecting Shake-Motion Events with UIEvent” (page 57).
● Remote control events. To handle remote control events, the first responder must implement the
remoteControlReceivedWithEvent: method of the UIResponder class.
● Action messages. When the user manipulates a control, such as a button or switch, and the target for the
action method is nil, the message is sent through a chain of responders starting with the control view.
● Editing-menu messages. When a user taps the commands of the editing menu, iOS uses a responder
chain to find an object that implements the necessary methods (such as cut:, copy:, and paste:). For
more information, see “Displaying and Managing the Edit Menu” and the sample code project, CopyPasteTile.
● Text editing. When a user taps a text field or a text view, that view automatically becomes the first
responder. By default, the virtual keyboard appears and the text field or text view becomes the focus of
editing. You can display a custom input view instead of the keyboard if it’s appropriate for your app. You
can also add a custom input view to any responder object. For more information, see “Custom Views for
Data Input”.
UIKit automatically sets the text field or text view that a user taps to be the first responder; apps must explicitly
set all other first responder objects with the becomeFirstResponder method.
The Responder Chain Follows a Specific Delivery Path
The responder chain sequence begins when iOS detects an event and passes it to an initial object, which is
typically a view. The initial view has the first opportunity to handle an event. Figure 2-2 shows two different
event delivery paths for two app configurations. An app’s event delivery path depends on its specific
construction, but all event delivery paths adhere to the same heuristics.
Figure 2-2 The responder chain on iOS
For the app on the left, the event follows this path:
1. The initial view attempts to handle the event or message. If it can’t handle the event, it passes the event
to its superview, because the initial view is not the topmost view in its view controller’s view hierarchy.
2. The superview attempts to handle the event. If the superview can’t handle the event, it passes the event
to its superview, because it is still not the topmost view in the view hierarchy.
3. The topmost view in the view controller’s view hierarchy attempts to handle the event. If the topmost
view can’t handle the event, it passes the event to its view controller.
4. The view controller attempts to handle the event, and if it can’t, passes the event to the window.
5. If the window object can’t handle the event, it passes the event to the singleton app object.
6. If the app object can’t handle the event, it discards the event.
The app on the right follows a slightly different path, but all event delivery paths follow these heuristics:
1. A view passes an event up its view controller’s view hierarchy until it reaches the topmost view.
Important: If you implement a custom view to handle remote control events, action messages, shake-motion
events with UIKit, or editing-menu messages, don’t forward the event or message to nextResponder
directly to send it up the responder chain. Instead, invoke the superclass implementation of the current
event handling method and let UIKit handle the traversal of the responder chain for you.
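For example, a custom responder that handles shake-motion events might look like the following sketch; anything it does not handle goes to the superclass implementation, which continues the responder-chain traversal for you.
- (void)motionEnded:(UIEventSubtype)motion withEvent:(UIEvent *)event {
    if (motion == UIEventSubtypeMotionShake) {
        // Respond to the shake here
    } else {
        [super motionEnded:motion withEvent:event];
    }
}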
Multitouch Events
Generally, you can handle almost all of your touch events with the standard controls and gesture recognizers
in UIKit. Gesture recognizers allow you to separate the recognition of a touch from the action that the touch
produces. In some cases, you want to do something in your app—such as drawing under a touch—where
there is no benefit to decoupling the touch recognition from the effect of the touch. If the view’s contents are
intimately related to the touch itself, you can handle touch events directly. You receive touch events when the
user touches your view, interpret those events based on their properties and then respond appropriately.
Implementing the Touch-Event Handling Methods in Your Subclass
Note: These methods have the same signature as the methods you override to create a custom
gesture recognizer, as discussed in “Creating a Custom Gesture Recognizer” (page 27).
Each of these touch methods corresponds to a touch phase: Began, Moved, Ended, and Canceled. When there
are new or changed touches for a given phase, the app object calls one of these methods. Each method takes
two parameters: a set of touches and an event.
The set of touches is a set (NSSet) of UITouch objects, representing new or changed touches for that phase.
For example, when a touch transitions from the Began phase to the Moved phase, the app calls the
touchesMoved:withEvent: method. The set of touches passed in to the touchesMoved:withEvent:
method will now include this touch and all other touches in the Moved phase. The other parameter is an event
(UIEvent object) that includes all touch objects for the event. This differs from the set of touches because
some of the touch objects in the event may not have changed since the previous event message.
All views that process touches expect to receive a full touch-event stream, so when you create your subclass,
keep in mind the following rules:
● If your custom responder is a subclass of UIView or UIViewController, you should implement all of
the event handling methods.
● If you subclass any other responder class, you can have a null implementation for some of the event
methods.
● In all methods, be sure to call the superclass implementation of the method.
If you prevent a responder object from receiving touches for a certain phase of an event, the resulting behavior
may be undefined and probably undesirable.
If a responder creates persistent objects while handling events, it should implement the
touchesCancelled:withEvent: method to dispose of those objects if the system cancels the sequence.
Cancellation occurs when an external event—for example, an incoming phone call—disrupts the current app’s
event processing. Note that a responder object should also dispose of any persistent objects when it receives
the last touchesEnded:withEvent: message for a multitouch sequence. See “Forwarding Touch Events” (page
51) to find out how to determine the last UITouchPhaseEnded touch object in a multitouch sequence.
A touch object stores phase information in the phase property, and each phase corresponds to one of the
touch event methods. A touch object stores location in three ways: the window in which the touch occurred,
the view within that window, and the exact location of the touch within that view. Figure 3-1 (page 39) shows
an example event with two touches in progress.
When a finger touches the screen, that touch is associated with both the underlying window and the view for
the lifetime of the event, even if the event is later passed to another view for handling. Use a touch’s location
information to determine how to respond to the touch. For example, if two touches occur in quick succession,
they are treated as a double tap only if they both occurred in the same view. A touch object stores both its
current location and its previous location, if there is one.
Retrieving and Querying Touch Objects
● The event object. The passed-in UIEvent object contains all of the touches for a given multitouch
sequence.
The multipleTouchEnabled property is set to NO by default, which means that a view receives only the first
touch in a multitouch sequence. When this property is disabled, you can retrieve a touch object by calling the
anyObject method on the set object because there is only one object in the set.
If you want to know the location of a touch, use the locationInView: method. By passing the parameter
self to this method, you get the location of the touch in the coordinate system of the receiving view. Similarly,
the previousLocationInView: method tells you the previous location of the touch. You can also determine
how many taps a touch has (tapCount), when the touch was created or last mutated (timestamp), and what
phase the touch is in (phase).
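For example, a view’s touchesMoved:withEvent: method could query a touch like this; the NSLog call is only for illustration.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *aTouch = [touches anyObject];
    CGPoint currentPoint = [aTouch locationInView:self];
    CGPoint previousPoint = [aTouch previousLocationInView:self];
    NSLog(@"Touch moved from %@ to %@ (taps: %lu, phase: %ld)",
          NSStringFromCGPoint(previousPoint), NSStringFromCGPoint(currentPoint),
          (unsigned long)aTouch.tapCount, (long)aTouch.phase);
}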
If you are interested in touches that have not changed since the last phase or that are in a different phase than
the touches in the passed-in set, you can find those in the event object. Figure 3-2 depicts an event object
that contains touch objects. To get all of these touch objects, call the allTouches method on the event object.
If you are interested only in touches associated with a specific window, call the touchesForWindow: method
of the UIEvent object. Figure 3-3 shows all the touches for window A.
If you want to get the touches associated with a specific view, call the touchesForView: method of the event
object. Figure 3-4 shows all the touches for view A.
Handling Tap Gestures
The best place to find this value is the touchesEnded:withEvent: method, because it corresponds to when
the user lifts a finger from a tap. By looking for the tap count in the touch-up phase—when the sequence has
ended—you ensure that the finger is really tapping and not, for instance, touching down and dragging. Listing
3-1 shows an example of how to determine whether a double tap occurred in one of your views.
Listing 3-1 Detecting a double tap gesture
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *aTouch = [touches anyObject];
    if (aTouch.tapCount >= 2) {
        [self respondToDoubleTapGesture:aTouch];
    }
}
Handling Swipe and Drag Gestures
To answer these questions, store the touch’s initial location and compare its location as the touch moves.
Listing 3-2 shows some basic tracking methods you could use to detect horizontal swipes in a view. In this
example, a view has a startTouchPosition property that it uses to store a touch’s initial location. In the
touchesEnded: method, it compares the ending touch position to the starting location to determine if it is
a swipe. If the touch moves too far vertically or does not move far enough, it is not considered to be a swipe.
This example does not show the implementation for the myProcessRightSwipe: or myProcessLeftSwipe:
methods, but the custom view would handle the swipe gesture there.
Listing 3-2 Tracking a swipe gesture in a view
#define HORIZ_SWIPE_DRAG_MIN 12
#define VERT_SWIPE_DRAG_MAX 4

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // startTouchPosition is a property
    self.startTouchPosition = [[touches anyObject] locationInView:self];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint currentPosition = [[touches anyObject] locationInView:self];

    // A swipe must move far enough horizontally without straying too far vertically
    if (fabs(self.startTouchPosition.x - currentPosition.x) >= HORIZ_SWIPE_DRAG_MIN &&
        fabs(self.startTouchPosition.y - currentPosition.y) <= VERT_SWIPE_DRAG_MAX) {
        if (self.startTouchPosition.x < currentPosition.x) {
            [self myProcessRightSwipe:touches withEvent:event];
        } else {
            [self myProcessLeftSwipe:touches withEvent:event];
        }
    }
    self.startTouchPosition = CGPointZero;
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    self.startTouchPosition = CGPointZero;
}
Notice that this code does not check the location of the touch in the middle of the gesture, which means that
a gesture could go all over the screen but still be considered a swipe if its start and end points are in line. A
more sophisticated swipe gesture recognizer should also check middle locations in the
touchesMoved:withEvent: method. To detect swipe gestures in the vertical direction, you would use similar
code but would swap the x and y components.
Listing 3-3 shows an even simpler implementation of tracking a single touch, this time the user is dragging a
view around the screen. Here, the custom view class fully implements only the touchesMoved:withEvent:
method. This method computes a delta value between the touch’s current and previous locations in the view.
It then uses this delta value to reset the origin of the view’s frame.
Listing 3-3 Dragging a view using a single touch
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *aTouch = [touches anyObject];
    // Compute how far the touch moved since the last event
    CGPoint loc = [aTouch locationInView:self];
    CGPoint prevloc = [aTouch previousLocationInView:self];
    CGFloat deltaX = loc.x - prevloc.x;
    CGFloat deltaY = loc.y - prevloc.y;

    // Shift the view's frame by that delta
    CGRect myFrame = self.frame;
    myFrame.origin.x += deltaX;
    myFrame.origin.y += deltaY;
    [self setFrame:myFrame];
}

Handling a Complex Multitouch Sequence
When handling an event with multiple touches, you often store information about a touch’s state so that you
can compare touches later. As an example, say you want to compare the final location of each touch with its
original location. In the touchesBegan:withEvent: method, you get the original location of each touch
from the locationInView: method and store those in a CFDictionaryRef object using the addresses of
the UITouch objects as keys. Then, in the touchesEnded:withEvent: method, you can use the address of
each passed-in touch object to get the object’s original location and compare it with its current location.
Important: Use a CFDictionaryRef data type rather than an NSDictionary object to track touches,
because NSDictionary copies its keys. The UITouch class does not adopt the NSCopying protocol, which
is required for object copying.
Listing 3-4 illustrates how to store the starting locations of UITouch objects in a Core Foundation dictionary.
The cacheBeginPointForTouches: method stores the location of each touch in the superview’s coordinates
so that it has a common coordinate system to compare the location of all of the touches.
Listing 3-4 Storing the beginning locations of multiple touches
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [self cacheBeginPointForTouches:touches];
}

- (void)cacheBeginPointForTouches:(NSSet *)touches {
    for (UITouch *touch in touches) {
        // touchBeginPoints is a CFMutableDictionaryRef keyed by the touch objects
        CGPoint *point = (CGPoint *)CFDictionaryGetValue(touchBeginPoints, touch);
        if (point == NULL) {
            point = (CGPoint *)malloc(sizeof(CGPoint));
            CFDictionarySetValue(touchBeginPoints, touch, point);
        }
        // Store each touch's starting location in the superview's coordinate system
        *point = [touch locationInView:touch.view.superview];
    }
}
Listing 3-5 builds on the previous example. It illustrates how to retrieve the initial locations from the dictionary.
Then, it gets the current locations of the same touches so that you can use these values to compute an affine
transformation (not shown).
Listing 3-5 Retrieving the initial locations of touch objects
- (CGAffineTransform)incrementalTransformWithTouches:(NSSet *)touches {
    // Look up each touch's cached begin point and its current location, then derive
    // an affine transform from those pairs of points (computation not shown)
    CGAffineTransform incrementalTransform = CGAffineTransformIdentity;
    return incrementalTransform;
}
The next example, Listing 3-6, does not use a dictionary to track touch mutations; however, it handles multiple
touches during an event. It shows a custom UIView object responding to touches by animating the movement
of a “Welcome” placard as a finger moves it around the screen. It also changes the language of the placard
when the user double taps. This example comes from the MoveMe sample code project, which you can examine
to get a better understanding of the event handling context.
Listing 3-6 Handling a complex multitouch sequence
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];

    // Move the placard view only if the touch was in the placard view
    if ([touch view] != placardView) {
        // On a double tap outside the placard view, change the placard's display string
        if ([touch tapCount] == 2) {
            [placardView setupNextDisplayString];
        }
        return;
    }
    CGPoint touchPoint = [touch locationInView:self];
    [self animateFirstTouchAtPoint:touchPoint];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];

    // If the touch was in the placardView, move the placardView to its location
    if ([touch view] == placardView) {
        CGPoint location = [touch locationInView:self];
        placardView.center = location;
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];

    // If the touch was in the placardView, bounce it back to the center
    if ([touch view] == placardView) {
        // Disable user interaction so subsequent touches don't interfere with the animation
        self.userInteractionEnabled = NO;
        [self animatePlacardViewToCenter];
    }
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    // If the system cancels the touches, restore the placard view's original state
    placardView.center = self.center;
    placardView.transform = CGAffineTransformIdentity;
}
To find out when the last finger in a multitouch sequence is lifted from a view, see how many touch objects
are in the passed-in set and how many are in the passed-in UIEvent object. If the number is the same, then
the multitouch sequence has concluded. Listing 3-7 illustrates how to do this in code.
Listing 3-7 Determining when the last touch in a multitouch sequence has ended
- (void)touchesEnded:(NSSet*)touches withEvent:(UIEvent*)event {
    if ([touches count] == [[event touchesForView:self] count]) {
        // The last touch in this multitouch sequence has ended
    }
}
Remember that a passed-in set contains all touch objects associated with the view that are new or changed
for a given phase, whereas the touch objects returned from the touchesForView: method include all
objects associated with the specified view.
Specifying Custom Touch Event Behavior
● Restrict event delivery to a single view. By default, a view’s exclusiveTouch property is set to NO,
which means that one view does not block other views in a window from receiving touches. If you set the
property to YES for a specific view, then that view receives touches if—and only if—it is the only view
tracking touches.
If your views are nonexclusive, a user can touch one finger in one view and another finger in another view,
and each view can track its touch simultaneously. Now imagine that you have views set up as in Figure
3-5 and that view A is an exclusive-touch view. If the user touches inside A, it recognizes the touch. But if
a user holds one finger inside view B and also touches inside view A, then view A does not receive the
touch because it was not the only view tracking touches. Similarly, if a user holds one finger inside view
A and also touches inside view B, then view B does not receive the touch because view A is the only view
tracking touches. At any time, the user can still touch both B and C, and those views can track their touches
simultaneously.
● Restrict event delivery to subviews. A custom UIView class can override hitTest:withEvent: so that
multitouch events are not delivered to a specific subview. See “Intercepting Touches by Overriding
Hit-Testing” (page 51) for a discussion of this technique.
You can also turn off touch-event delivery completely, or just for a period of time:
● Turn off delivery of touch events. Set a view’s userInteractionEnabled property to NO to turn off
delivery of touch events. Note that a view also does not receive touch events if it’s hidden or transparent.
● Turn off delivery of touch events for a period. Sometimes you want to temporarily turn off event
delivery—for example, while your code is performing animations. Your app can call the
beginIgnoringInteractionEvents method to stop receiving touch events, and then later call the
endIgnoringInteractionEvents method to resume touch-event delivery, as in the sketch that follows.
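A minimal sketch of that pattern, assuming a fade-out animation on an imageView property, might look like this:
// Ignore touches while the animation runs
[[UIApplication sharedApplication] beginIgnoringInteractionEvents];
[UIView animateWithDuration:0.5 animations:^{
    self.imageView.alpha = 0.0;
} completion:^(BOOL finished) {
    // Resume normal touch-event delivery
    [[UIApplication sharedApplication] endIgnoringInteractionEvents];
}];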
Intercepting Touches by Overriding Hit-Testing
Overriding hit-testing ensures that the superview receives all touches because, by setting itself as the hit-test
view, the superview intercepts and receives touches that are normally passed to the subview first. If a superview
does not override hitTest:withEvent:, touch events are associated with the subviews where they first
occurred and are never sent to the superview.
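A sketch of a superview that intercepts all touches in this way might override hitTest:withEvent: as follows:
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    // Claim any touch that lands in this view's bounds so that
    // subviews never become the hit-test view
    if ([self pointInside:point withEvent:event]) {
        return self;
    }
    return [super hitTest:point withEvent:event];
}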
Recall that there are two hit-test methods: the hitTest:withEvent: method of views and the hitTest:
method of layers, as described in “Hit-Testing Returns the View Where a Touch Occurred” (page 31). You rarely
need to call these methods yourself. It’s more likely that you will override them to intercept touch events from
subviews. However, sometimes responders perform hit-testing prior to event forwarding (see “Forwarding
Touch Events” (page 51)).
Forwarding Touch Events
For example, let’s say an app has three custom views: A, B, and C. When the user touches view A, the app’s
window determines that it is the hit-test view and sends to it the initial touch event. Depending on certain
conditions, view A forwards the event to either view B or view C. In this case, views A, B, and C must be aware
of this forwarding, and views B and C must be able to deal with touches that are not bound to them.
Event forwarding often requires analyzing touch objects to determine where they should be forwarded. There
are a couple of approaches you can take for this analysis:
● With an “overlay” view, such as a common superview, use hit-testing to intercept events for analysis prior
to forwarding them to subviews (see “Intercepting Touches by Overriding Hit-Testing” (page 51)).
● Override sendEvent: in a custom subclass of UIWindow, analyze touches, and forward them to the
appropriate responders.
Overriding the sendEvent: method allows you to monitor the events your app receives. Both the
UIApplication object and each UIWindow object dispatch events in the sendEvent: method, so this
method serves as a funnel point for events coming in to an app. This is something that very few apps need to
do and, if you do override sendEvent:, be sure to invoke the superclass implementation—[super
sendEvent:theEvent]. Never tamper with the distribution of events.
Listing 3-8 illustrates this technique in a subclass of UIWindow. In this example, events are forwarded to a
custom helper responder that performs affine transformations on the view that it is associated with.
- (void)sendEvent:(UIEvent *)event {
    // Collect all the touches you care about from the event
    NSMutableSet *began = [NSMutableSet set];
    NSMutableSet *moved = [NSMutableSet set];
    NSMutableSet *ended = [NSMutableSet set];
    NSMutableSet *canceled = [NSMutableSet set];

    // Sort the touches by phase, similar to normal event dispatch
    for (UITouch *touch in [event allTouches]) {
        switch ([touch phase]) {
            case UITouchPhaseBegan:
                [began addObject:touch];
                break;
            case UITouchPhaseMoved:
                [moved addObject:touch];
                break;
            case UITouchPhaseEnded:
                [ended addObject:touch];
                break;
            case UITouchPhaseCancelled:
                [canceled addObject:touch];
                break;
            default:
                break;
        }
    }

    // Call methods to handle the touches
    // (for example, forward each set to the helper responder object)
    [super sendEvent:event];
}
Notice that the overriding subclass invokes the superclass implementation of the sendEvent: method. This
is important to the integrity of the touch-event stream.
Motion Events
Users generate motion events when they move, shake, or tilt the device. These motion events are detected
by the device hardware, specifically, the accelerometer and the gyroscope.
The accelerometer is actually made up of three accelerometers, one for each axis—x, y, and z. Each one
measures changes in velocity over time along a linear path. Combining all three accelerometers lets you detect
device movement in any direction and get the device’s current orientation. Although there are three
accelerometers, the remainder of this document refers to them as a single entity. The gyroscope measures
the rate of rotation around the three axes.
All motion events originate from the same hardware. There are several different ways that you can access that
hardware data, depending on your app’s needs:
● If you need to detect the general orientation of a device, but you don’t need to know the orientation
vector, use the UIDevice class. See “Getting the Current Device Orientation with UIDevice” (page 55) for
more information.
● If you want your app to respond when a user shakes the device, you can use the UIKit motion-event
handling methods to get information from the passed-in UIEvent object. See “Detecting Shake-Motion
Events with UIEvent” (page 57) for more information.
● If neither the UIDevice nor the UIEvent classes are sufficient, it’s likely you’ll want to use the Core Motion
framework to access the accelerometer, gyroscope, and device motion classes. See “Capturing Device
Movement with Core Motion” (page 59) for more information.
Getting the Current Device Orientation with UIDevice

Before you can get the current orientation, you need to tell the UIDevice class to begin generating device orientation notifications by calling the beginGeneratingDeviceOrientationNotifications method. This turns on the accelerometer hardware, which may be off to conserve battery power. Listing 4-1 demonstrates this in the viewDidLoad method.
After enabling orientation notifications, get the current orientation from the orientation property of the
UIDevice object. If you want to be notified when the device orientation changes, register to receive
UIDeviceOrientationDidChangeNotification notifications. The device orientation is reported using
UIDeviceOrientation constants, indicating whether the device is in landscape mode, portrait mode,
screen-side up, screen-side down, and so on. These constants indicate the physical orientation of the device
and don’t necessarily correspond to the orientation of your app’s user interface.
When you no longer need to know the orientation of the device, always disable orientation notifications by
calling the UIDevice method, endGeneratingDeviceOrientationNotifications. This gives the system
the opportunity to disable the accelerometer hardware if it’s not being used elsewhere, which preserves battery
power.
- (void)viewDidLoad {
    [super viewDidLoad];
    // Turn on the accelerometer and begin receiving orientation notifications
    [[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications];
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(orientationChanged:)
                                                 name:UIDeviceOrientationDidChangeNotification
                                               object:nil];
}

- (void)orientationChanged:(NSNotification *)notification {
    // Respond to the new orientation, available from [[UIDevice currentDevice] orientation]
}

- (void)viewDidDisappear:(BOOL)animated {
    [super viewDidDisappear:animated];
    // Stop observing and allow the system to turn off the accelerometer
    [[NSNotificationCenter defaultCenter] removeObserver:self];
    [[UIDevice currentDevice] endGeneratingDeviceOrientationNotifications];
}
For another example of responding to UIDevice orientation changes, see the AlternateViews sample code
project.
Detecting Shake-Motion Events with UIEvent
Motion events are simpler than touch events. The system tells an app when a motion starts and stops, but not
when each individual motion occurs. And, motion events include only an event type (UIEventTypeMotion),
event subtype (UIEventSubtypeMotionShake), and timestamp.
To receive shake-motion events, a view or view controller must be in the responder chain and should make itself the first responder, as in this example:

- (BOOL)canBecomeFirstResponder {
    return YES;
}

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    [self becomeFirstResponder];
}
Motion events use the responder chain to find an object that can handle the event. When the user starts
shaking the device, iOS sends the first motion event to the first responder. If the first responder doesn’t handle
the event, it progresses up the responder chain. See “The Responder Chain Follows a Specific Delivery Path” (page
34) for more information. If a shaking-motion event travels up the responder chain to the window without
being handled and the applicationSupportsShakeToEdit property of UIApplication is set to YES (the
default), iOS displays a sheet with Undo and Redo commands.
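If your app presents its own undo interface and doesn't want that system sheet, it can opt out. A minimal sketch, placed in the app delegate, might look like this:

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Suppress the system Undo/Redo sheet for shake gestures; this app handles shakes itself.
    application.applicationSupportsShakeToEdit = NO;
    return YES;
}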
To handle a shake-motion event, a responder implements the motionBegan:withEvent: or motionEnded:withEvent: method (shake detection typically happens in motionEnded:withEvent:). A responder should also implement the motionCancelled:withEvent: method to respond when iOS cancels
a motion event. An event is canceled if the shake motion is interrupted or if iOS determines that the motion
is not valid after all—for example, if the shaking lasts too long.
Listing 4-3 is extracted from the sample code project, GLPaint. In this app, the user paints on the screen, and then shakes the device to erase the painting. This code detects whether a shake has occurred in the motionEnded:withEvent: method, and if it has, posts a notification to perform the shake-to-erase functionality.

- (void)motionEnded:(UIEventSubtype)motion withEvent:(UIEvent *)event {
    if (motion == UIEventSubtypeMotionShake) {
        // Post a notification that triggers the shake-to-erase behavior
        // (the notification name shown here is illustrative, not the sample's exact name)
        [[NSNotificationCenter defaultCenter] postNotificationName:@"shakeToErase" object:self];
    }
}
Note: Besides its simplicity, another reason to consider using shake-motion events, instead of Core
Motion, is that you can simulate shake-motion events in iOS Simulator as you test and debug your
app. For more information about iOS Simulator, see iOS Simulator User Guide.
Setting and Checking Required Hardware Capabilities for Motion Events

Declare your app’s required capabilities by adding keys to your app’s property list. There are two
UIRequiredDeviceCapabilities keys for motion events, based on hardware source:
● accelerometer
● gyroscope
Note: You don’t need to include the accelerometer key if your app detects only device orientation
changes.
You can use either an array or a dictionary to specify the key-values. If you use an array, list each required
feature as a key in the array. If you use a dictionary, specify a Boolean value for each required key in the
dictionary. In both cases, not listing a key for a feature indicates that the feature is not required. For more
information, see “UIRequiredDeviceCapabilities” in Information Property List Key Reference .
If the features of your app that use gyroscope data are not integral to the user experience, you might want to
allow users with non-gyroscope devices to download your app. If you do not make gyroscope a required
hardware capability, but still have code that requests gyroscope data, you need to check whether the gyroscope
is available at runtime. You do this with the gyroAvailable property of the CMMotionManager class.
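For example, a sketch of that runtime check might look like the following (the helper method name is hypothetical):

- (void)startGyroIfAvailable:(CMMotionManager *)motionManager {
    if (motionManager.gyroAvailable) {
        // The device has a gyroscope, so it's safe to request gyroscope updates.
        [motionManager startGyroUpdates];
    } else {
        // Fall back to accelerometer-only behavior on devices without a gyroscope.
    }
}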
Capturing Device Movement with Core Motion

Core Motion is distinct from UIKit. It is not connected with the UIEvent model and does not use the responder chain. Instead, Core Motion simply delivers motion events directly to apps that request them.
Core Motion events are represented by three data objects, each encapsulating one or more measurements:
● A CMAccelerometerData object captures the acceleration along each of the spatial axes.
● A CMGyroData object captures the rate of rotation around each of the three spatial axes.
● A CMDeviceMotion object encapsulates several different measurements, including attitude and more
useful measurements of rotation rate and acceleration.
The CMMotionManager class is the central access point for Core Motion. You create an instance of the class,
specify an update interval, request that updates start, and handle motion events as they are delivered. An app
should create only a single instance of the CMMotionManager class. Multiple instances of this class can affect
the rate at which an app receives data from the accelerometer and gyroscope.
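One common way to follow the single-instance guideline is to expose the manager from a shared object such as the app delegate, which is the pattern the MotionGraphs listings later in this chapter assume. The declaration shown here is a sketch; the sample's actual code may differ:

#import <UIKit/UIKit.h>
#import <CoreMotion/CoreMotion.h>

@interface APLAppDelegate : UIResponder <UIApplicationDelegate>
// The single CMMotionManager shared by the whole app
@property (strong, nonatomic, readonly) CMMotionManager *sharedManager;
@end

@implementation APLAppDelegate
@synthesize sharedManager = _sharedManager;

- (CMMotionManager *)sharedManager {
    if (_sharedManager == nil) {
        _sharedManager = [[CMMotionManager alloc] init];
    }
    return _sharedManager;
}
@end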
All of the data-encapsulating classes of Core Motion are subclasses of CMLogItem, which defines a timestamp
so that motion data can be tagged with a time and logged to a file. An app can compare the timestamp of
motion events with earlier motion events to determine the true update interval between events.
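For instance, a minimal sketch of deriving the actual interval between two accelerometer samples from their CMLogItem timestamps (the lastTimestamp property is an assumption):

- (void)noteIntervalForSample:(CMAccelerometerData *)sample {
    NSTimeInterval actualInterval = sample.timestamp - self.lastTimestamp;
    NSLog(@"True update interval: %f seconds", actualInterval);
    self.lastTimestamp = sample.timestamp;
}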
For each of the data-motion types described, the CMMotionManager class offers two approaches for obtaining
motion data:
● Pull. An app requests that updates start and then periodically samples the most recent measurement of
motion data.
● Push. An app specifies an update interval and implements a block for handling the data. Then, it requests
that updates start, and passes Core Motion an operation queue and the block. Core Motion delivers each
update to the block, which executes as a task in the operation queue.
Pull is the recommended approach for most apps, especially games. It is generally more efficient and requires
less code. Push is appropriate for data-collection apps and similar apps that cannot miss a single sample
measurement. Both approaches have benign thread-safety effects; with push, your block executes on the
operation-queue’s thread whereas with pull, Core Motion never interrupts your threads.
Important: With Core Motion, you have to test and debug your app on a device. There is no support in
iOS Simulator for accelerometer or gyroscope data.
Always stop motion updates as soon as your app finishes processing the necessary data. Stopping updates lets Core Motion turn off the motion sensors, which saves battery power.
Event frequency (Hz)    Usage
30–60                   Suitable for games and other apps that use the accelerometer for
                        real-time user input.
70–100                  Suitable for apps that need to detect high-frequency motion. For
                        example, you might use this interval to detect the user hitting the
                        device or shaking it very quickly.
You can set the reporting interval to be as small as 10 milliseconds (ms), which corresponds to a 100 Hz update rate, but most apps operate sufficiently with a larger interval.
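For example, to request roughly a 60 Hz update rate, set the interval to the reciprocal of the desired frequency (the motionManager variable here is assumed to be your CMMotionManager instance):

// About 16.7 ms between samples, or roughly 60 updates per second
motionManager.accelerometerUpdateInterval = 1.0 / 60.0;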
Figure 4-1 The accelerometer measures velocity along the x, y, and z axes
To start receiving and handling accelerometer data, create an instance of the CMMotionManager class and
call one of the following methods:
● startAccelerometerUpdates—the pull approach
After you call this method, Core Motion continually updates the accelerometerData property of
CMMotionManager with the latest measurement of accelerometer activity. Then, you periodically sample
this property, usually in a render loop that is common in games. If you adopt this polling approach, set
the update-interval property (accelerometerUpdateInterval) to the maximum interval at which Core
Motion performs updates. (A minimal sketch of this polling approach appears after this list.)
● startAccelerometerUpdatesToQueue:withHandler:—the push approach
Before you call this method, assign an update interval to the accelerometerUpdateInterval property,
create an instance of NSOperationQueue and implement a block of type CMAccelerometerHandler
that handles the accelerometer updates. Then, call the
startAccelerometerUpdatesToQueue:withHandler: method on the motion-manager object,
passing in the operation queue and the block. At the specified update interval, Core Motion passes the
latest sample of accelerometer activity to the block, which executes as a task in the queue.
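For the pull approach described in the first bullet above, a minimal sketch might look like the following (the motionManager property and the per-frame method are assumptions, not part of the sample code):

// Start updates once, for example when the game scene appears.
- (void)startPollingAccelerometer {
    self.motionManager.accelerometerUpdateInterval = 1.0 / 60.0;  // illustrative interval
    [self.motionManager startAccelerometerUpdates];
}

// Called from the app's render loop; samples the most recent measurement.
- (void)updateWithCurrentAcceleration {
    CMAccelerometerData *data = self.motionManager.accelerometerData;
    if (data != nil) {
        // Use data.acceleration.x, .y, and .z to drive game input here.
    }
}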
Listing 4-4 is extracted from the MotionGraphs sample code project, which you can examine for more context.
In this app, the user moves a slider to specify an update interval. The startUpdatesWithSliderValue:
method uses the slider value to compute the new update interval. Then, it creates an instance of the
CMMotionManager class, checks to make sure that the device has an accelerometer, and assigns the update
interval to the motion manager. This app uses the push approach to retrieve accelerometer data and plot it
on a graph. Note that it stops accelerometer updates in the stopUpdates method.
- (void)startUpdatesWithSliderValue:(int)sliderValue {
    // Compute the new update interval from the slider value (exact mapping omitted here)
    NSTimeInterval updateInterval = 0.01 + 0.01 * sliderValue;
    // Use the app's single, shared CMMotionManager instance
    CMMotionManager *mManager = [(APLAppDelegate *)[[UIApplication sharedApplication] delegate] sharedManager];
    __weak typeof(self) weakSelf = self;
    if ([mManager isAccelerometerAvailable] == YES) {
        [mManager setAccelerometerUpdateInterval:updateInterval];
        [mManager startAccelerometerUpdatesToQueue:[NSOperationQueue mainQueue]
                                       withHandler:^(CMAccelerometerData *accelerometerData, NSError *error) {
            [weakSelf.graphView addX:accelerometerData.acceleration.x
                y:accelerometerData.acceleration.y z:accelerometerData.acceleration.z];
            [weakSelf setLabelValueX:accelerometerData.acceleration.x
                y:accelerometerData.acceleration.y z:accelerometerData.acceleration.z];
        }];
    }
}
- (void)stopUpdates {
    CMMotionManager *mManager = [(APLAppDelegate *)[[UIApplication sharedApplication] delegate] sharedManager];
    if ([mManager isAccelerometerActive] == YES) {
        [mManager stopAccelerometerUpdates];
    }
}
Figure 4-2 The gyroscope measures rotation around the x, y, and z axes
Each time you request a gyroscope update, Core Motion takes a biased estimate of the rate of rotation and
returns this information in a CMGyroData object. CMGyroData has a rotationRate property that stores a
CMRotationRate structure, which captures the rotation rate for each of the three axes in radians per second.
Note that the rotation rate measured by a CMGyroData object is biased. You can get a much more accurate,
unbiased measurement by using the CMDeviceMotion class. See “Handling Processed Device Motion
Data” (page 65) for more information.
When analyzing rotation-rate data—specifically, when analyzing the fields of the CMRotationRate structure—follow the “right-hand rule” to determine the direction of rotation, as shown in Figure 4-2. For
example, if you wrap your right hand around the x-axis such that the tip of the thumb points toward positive
x, a positive rotation is one toward the tips of the other four fingers. A negative rotation goes away from the
tips of those fingers.
To start receiving and handling rotation-rate data, create an instance of the CMMotionManager class and call
one of the following methods:
● startGyroUpdates—the pull approach
After you call this method, Core Motion continually updates the gyroData property of CMMotionManager
with the latest measurement of gyroscope activity. Then, you periodically sample this property. If you
adopt this polling approach, set the update-interval property (gyroUpdateInterval) to the maximum
interval at which Core Motion performs updates.
● startGyroUpdatesToQueue:withHandler:—the push approach
Before you call this method, assign an update interval to the gyroUpdateInterval property, create an
instance of NSOperationQueue, and implement a block of type CMGyroHandler that handles the
gyroscope updates. Then, call the startGyroUpdatesToQueue:withHandler: method on the
motion-manager object, passing in the operation queue and the block. At the specified update interval,
Core Motion passes the latest sample of gyroscope activity to the block, which executes as a task in the
queue.
Listing 4-5 demonstrates this approach. It is also extracted from the MotionGraphs sample code project and is nearly identical to Listing 4-4; the app uses the push approach to retrieve gyroscope data so that it can plot the data onscreen.
- (void)startUpdatesWithSliderValue:(int)sliderValue {
    // Compute the new update interval from the slider value (exact mapping omitted here)
    NSTimeInterval updateInterval = 0.01 + 0.01 * sliderValue;
    CMMotionManager *mManager = [(APLAppDelegate *)[[UIApplication sharedApplication] delegate] sharedManager];
    __weak typeof(self) weakSelf = self;
    if ([mManager isGyroAvailable] == YES) {
        [mManager setGyroUpdateInterval:updateInterval];
        [mManager startGyroUpdatesToQueue:[NSOperationQueue mainQueue]
                              withHandler:^(CMGyroData *gyroData, NSError *error) {
            [weakSelf.graphView addX:gyroData.rotationRate.x
                y:gyroData.rotationRate.y z:gyroData.rotationRate.z];
            [weakSelf setLabelValueX:gyroData.rotationRate.x
                y:gyroData.rotationRate.y z:gyroData.rotationRate.z];
        }];
    }
}
- (void)stopUpdates {
    CMMotionManager *mManager = [(APLAppDelegate *)[[UIApplication sharedApplication] delegate] sharedManager];
    if ([mManager isGyroActive] == YES) {
        [mManager stopGyroUpdates];
    }
}
Handling Processed Device Motion Data

Core Motion’s device-motion service combines accelerometer and gyroscope data into processed measurements that include the device’s attitude, its rotation rate, the gravity vector, and the user-generated acceleration. An instance of the CMDeviceMotion class encapsulates all of this data. Additionally, you do not need to filter the acceleration data because device-motion separates gravity and user acceleration.
You can access attitude data through a CMDeviceMotion object’s attitude property, which encapsulates
a CMAttitude object. Each instance of the CMAttitude class encapsulates three mathematical representations
of attitude:
● a quaternion
● a rotation matrix
● the three Euler angles (roll, pitch, and yaw)
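For instance, a sketch of reading these representations from a device-motion sample (the motionManager property is assumed):

CMAttitude *attitude = self.motionManager.deviceMotion.attitude;
// Euler angles, in radians
double roll  = attitude.roll;
double pitch = attitude.pitch;
double yaw   = attitude.yaw;
// The same attitude as a quaternion and as a rotation matrix
CMQuaternion quaternion = attitude.quaternion;
CMRotationMatrix matrix = attitude.rotationMatrix;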
To start receiving and handling device-motion updates, create an instance of the CMMotionManager class
and call one of the following two methods on it:
● startDeviceMotionUpdates—the pull approach
After you call this method, Core Motion continuously updates the deviceMotion property of
CMMotionManager with the latest refined measurements of accelerometer and gyroscope activity, as
encapsulated in a CMDeviceMotion object. Then, you periodically sample this property. If you adopt this
polling approach, set the update-interval property (deviceMotionUpdateInterval) to the maximum
interval at which Core Motion performs updates.
Listing 4-6 illustrates this approach.
● startDeviceMotionUpdatesToQueue:withHandler:—the push approach
Before you call this method, assign an update interval to the deviceMotionUpdateInterval property,
create an instance of NSOperationQueue, and implement a block of the CMDeviceMotionHandler
type that handles the device-motion updates. Then, call the
startDeviceMotionUpdatesToQueue:withHandler: method on the motion-manager object, passing
in the operation queue and the block. At the specified update interval, Core Motion passes the latest
sample of combined accelerometer and gyroscope activity, as represented by a CMDeviceMotion object,
to the block, which executes as a task in the queue.
Listing 4-6 uses code from the pARk sample code project to demonstrate how to start and stop device motion
updates. The startDeviceMotion method uses the pull approach to start device updates with a reference
frame. See “Device Attitude and the Reference Frame” (page 67) for more about device motion reference
frames.
- (void)startDeviceMotion {
    // Create a CMMotionManager (kept in an instance variable)
    motionManager = [[CMMotionManager alloc] init];
    // Show the calibration UI when Core Motion needs it to deliver good attitude data
    motionManager.showsDeviceMovementDisplay = YES;
    motionManager.deviceMotionUpdateInterval = 1.0 / 60.0;  // illustrative interval
    // Pull approach: start updates using a reference frame, then sample deviceMotion as needed
    // (the true-north frame shown here is one of several available reference frames)
    [motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXTrueNorthZVertical];
}

- (void)stopDeviceMotion {
    [motionManager stopDeviceMotionUpdates];
}
Device Attitude and the Reference Frame

In the Core Motion reference frame, the z-axis is always vertical, and the x- and y-axis are always orthogonal
to gravity, which makes the gravity vector [0, 0, -1]. This is also known as the gravity reference. If you multiply
the rotation matrix obtained from a CMAttitude object by the gravity reference, you get gravity in the device's
frame. Or, mathematically:

gravity (in the device frame) = R × (0, 0, -1)

where R is the rotation matrix obtained from the CMAttitude object.
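As a concrete sketch of that multiplication using the fields of the CMRotationMatrix structure (assuming a device-motion sample is available):

CMRotationMatrix r = self.motionManager.deviceMotion.attitude.rotationMatrix;
// Multiply the rotation matrix by the gravity reference (0, 0, -1)
// to express gravity in the device's frame.
double gravityX = -r.m13;
double gravityY = -r.m23;
double gravityZ = -r.m33;

Core Motion also reports this vector directly in the gravity property of CMDeviceMotion.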
You can change the reference frame that CMAttitude uses. To do that, cache the attitude object that contains
the reference frame and pass it as the argument to multiplyByInverseOfAttitude:. The attitude object receiving the message changes so that it represents the change in attitude from the passed-in reference frame.
Most apps are interested in the change in device attitude. To see how this might be useful, consider a baseball
game where the user rotates the device to swing. Normally, at the beginning of a pitch, the bat would be at
some resting orientation. After that, the bat is rendered based on how the device's attitude changed from the
start of a pitch. Listing 4-7 illustrates how you might do this.
- (void)startPitch {
    // referenceAttitude is a property
    self.referenceAttitude = self.motionManager.deviceMotion.attitude;
}

- (void)drawView {
    // Express the current attitude relative to the cached reference attitude
    CMAttitude *currentAttitude = self.motionManager.deviceMotion.attitude;
    [currentAttitude multiplyByInverseOfAttitude:self.referenceAttitude];
    // Render the scene with the change in attitude since the pitch started
    [self updateModelsWithAttitude:currentAttitude];
    [renderer render];
}
Remote Control Events
Remote control events let users control an app’s multimedia. If your app plays audio or video content, you
might want it to respond to remote control events that originate from either transport controls or external
accessories. (External accessories must conform to Apple-provided specifications.) iOS converts commands
into UIEvent objects and delivers the events to an app. The app sends them to the first responder and, if the
first responder doesn’t handle them, they travel up the responder chain. For more information about the
responder chain, see “The Responder Chain Follows a Specific Delivery Path” (page 34).
This chapter describes how to receive and handle remote control events. The code examples are taken from
the Audio Mixer (MixerHost) sample code project.
To make itself capable of becoming first responder, the view or view controller should override the
canBecomeFirstResponder method of the UIResponder class to return YES. It should also send itself the
becomeFirstResponder method at an appropriate time. For example, a view controller might use the
becomeFirstResponder method in an override of the viewDidAppear: method, as in Listing 5-1. This
example also shows the view controller “turning on” the delivery of remote control events by calling the
beginReceivingRemoteControlEvents method of UIApplication.
- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    // Turn on delivery of remote control events
    [[UIApplication sharedApplication] beginReceivingRemoteControlEvents];
    // Become first responder so remote control events are delivered here first
    [self becomeFirstResponder];
}
When the view or view controller is no longer managing audio or video, it should turn off the delivery of remote
control events. It should also resign first-responder status in the viewWillDisappear: method, as shown in
Listing 5-2.
- (void)viewWillDisappear:(BOOL)animated {
    // Turn off delivery of remote control events and give up first-responder status
    [[UIApplication sharedApplication] endReceivingRemoteControlEvents];
    [self resignFirstResponder];
    [super viewWillDisappear:animated];
}
To respond to remote control events, the first responder implements the remoteControlReceivedWithEvent: method of UIResponder and examines the subtype of the event it receives:

- (void)remoteControlReceivedWithEvent:(UIEvent *)receivedEvent {
    if (receivedEvent.type == UIEventTypeRemoteControl) {
        switch (receivedEvent.subtype) {
            case UIEventSubtypeRemoteControlTogglePlayPause:
                // Toggle audio playback here
                break;
            case UIEventSubtypeRemoteControlPreviousTrack:
                // Skip to the previous track here
                break;
            case UIEventSubtypeRemoteControlNextTrack:
                // Skip to the next track here
                break;
            default:
                break;
        }
    }
}
Testing Remote Control Events on a Device
For testing purposes, you can programmatically make your app begin audio playback and then test the remote
control events by tapping the Now Playing Controls. Note that a deployed app should not programmatically
begin playback; that should always be user-controlled.
72
Document Revision History
This table describes the changes to Event Handling Guide for iOS.
Date Notes
2013-01-28 Reordered the chapters to present gesture recognizers first and conceptual
information about event delivery and the responder chain later. Also
updated content to include the most recent information for iOS 6.0.
2010-07-09 Changed the title from "Event Handling Guide for iPhone OS" and changed
"iPhone OS" to "iOS" throughout. Updated the section on the Core Motion
framework.
2010-05-18 First version of a document that describes how applications can handle
multitouch, motion, and other events.
Apple Inc.
Copyright © 2013 Apple Inc.
All rights reserved.