
Core Data Features in iOS 8 (Part 2)


In our last blog post about Core Data features in iOS 8, we learned about NSBatchUpdateRequest. This time, we will dig into another new feature: asynchronous fetch requests.

Asynchronous fetch requests

The second feature Apple introduced in Core Data in iOS 8 is the ability to execute fetch requests asynchronously, and therefore to schedule a time-consuming fetch in the background. For that, another subclass of NSPersistentStoreRequest has been introduced: NSAsynchronousFetchRequest.

Fetching objects can be time-consuming for different reasons: numerous records, complicated predicates, sort descriptors and so on. If this long operation is executed in the foreground, the main thread, also responsible for the UI, will block and the application won’t be responsive anymore. With background fetch requests, the fetch can be dispatched to a dedicated queue in the background, keeping the main thread free and your UI responsive.

At this point, one can argue that using GCD to dispatch a regular fetch request in the background is just as good. There are, however, two reasons why it is not:

  • First and foremost, it is a pain to do with GCD. You have to instantiate a new NSManagedObjectContext on the background thread, create an NSFetchRequest to fetch the IDs of the objects you want, pass the IDs back to the main thread and finally retrieve the NSManagedObjects one by one to use them.
  • The second reason is that asynchronous fetch requests allow you, under certain conditions, to track the progress of the operation or to cancel it. More on that later.

In practice

Let’s see how to use NSAsynchronousFetchRequest with an example. We will once again assume our database contains a substantial set of articles, which we now want to query and order by title. This can be quite a heavy task depending on the number of items fetched.

First, the NSFetchRequest is created as usual.

// The fetch request we would normally use
NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:@"Article"];
fetchRequest.sortDescriptors = @[[NSSortDescriptor sortDescriptorWithKey:@"title" ascending:YES]];

Then, instead of calling [NSManagedObjectContext executeFetchRequest:error:], the fetch request is wrapped in an NSAsynchronousFetchRequest.

NSAsynchronousFetchRequest *asynchronousFetchRequest = [[NSAsynchronousFetchRequest alloc] initWithFetchRequest:fetchRequest completionBlock:^(NSAsynchronousFetchResult *result) {
    if (result.operationError) {
        /* Handle the error */
    } else {
        NSArray *articles = result.finalResult;
        dispatch_async(dispatch_get_main_queue(), ^{
            /* update your UI on main thread */
        });
    }
}];

Note that the fetched articles are now available in the completion block, but any update to the UI must be dispatched to the main queue.

Finally, the method [NSManagedObjectContext executeRequest:error:] starts the asynchronous request and returns an NSPersistentStoreAsynchronousResult.

NSPersistentStoreAsynchronousResult *result = (NSPersistentStoreAsynchronousResult *)[self.managedObjectContext executeRequest:asynchronousFetchRequest error:NULL];

Note: to use asynchronous fetch requests, be careful to change the way you initialize the NSManagedObjectContext in your Core Data stack if you don’t want to run into the following exception:

NSConfinementConcurrencyType context <NSManagedObjectContext: 0x...> cannot support asynchronous fetch request <NSAsynchronousFetchRequest: 0x...> with fetch request <NSFetchRequest: 0x...>

You have to use the dedicated initializer [NSManagedObjectContext initWithConcurrencyType:] with one of the following types (a minimal setup sketch follows the list):

  • NSConfinementConcurrencyType is the default value. It does not allow asynchronous fetch requests and has been labeled obsolete by Apple. It should be avoided.
  • NSPrivateQueueConcurrencyType to schedule work on a private queue.
  • NSMainQueueConcurrencyType to schedule work on the main queue.
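To make this concrete, here is a minimal sketch of a context set up for asynchronous fetching; the persistentStoreCoordinator property on self is an assumption about how your Core Data stack is organized.

// Create a main-queue context backed by an existing persistent store coordinator
// (assumed to be created elsewhere in your Core Data stack).
NSManagedObjectContext *context = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSMainQueueConcurrencyType];
context.persistentStoreCoordinator = self.persistentStoreCoordinator;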

Track the progress

The advantage of NSAsynchronousFetchRequest is the ability to track the progress of the background fetch operation.

The executeRequest method used to start the asynchronous fetch request returns an NSPersistentStoreAsynchronousResult right away. This result has an NSProgress *progress property. This property has two (undocumented) purposes: the object itself can help track the current progress of the asynchronous request, or stop it completely using the cancel method.

To track the progress, you first have to provide a value to the estimatedResultCount property of your fetch request. Otherwise Core Data will not be aware of the number of expected rows and will not be able to estimate the progress. Then create a parent NSProgress and register as an observer for the key @"fractionCompleted" of the child’s progress (the one from the fetch result).

// Create the parent progress
NSProgress *parentProgress = [NSProgress progressWithTotalUnitCount:1];
[parentProgress becomeCurrentWithPendingUnitCount:1];

/* ... execute the asynchronous fetch request and get the result ... */

// Register as observer for key @"fractionCompleted" for the fetch progress
[result.progress addObserver:self forKeyPath:@"fractionCompleted" options:NSKeyValueObservingOptionInitial context:NULL];
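To complete the picture, here is a minimal sketch of the corresponding KVO callback; the progressView outlet and the way the value is forwarded to the UI are assumptions, and the commented-out cancel call shows where the same NSProgress object could stop the fetch.

// Minimal sketch of the KVO callback for the fetch progress.
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    if ([keyPath isEqualToString:@"fractionCompleted"]) {
        double fraction = [(NSProgress *)object fractionCompleted];
        dispatch_async(dispatch_get_main_queue(), ^{
            self.progressView.progress = (float)fraction; // hypothetical UIProgressView outlet
        });
        // The same NSProgress object can also cancel the asynchronous fetch:
        // [(NSProgress *)object cancel];
    } else {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    }
}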

What happens under the hood is that each time a new record is fetched, Core Data divides the count of fetched objects by the estimatedResultCount to update the fractionCompleted property of the progress like so:

fractionCompleted = objectsFetchedCount / estimatedResultCount

Notice that if the estimatedResultCount is too low, meaning that there are more objects to fetch than we previously thought, fractionCompleted can be greater than 100%. That’s why Core Data internally updates the value of the estimated result count, as sketched below:

while (!finished) {
    /* ... fetch objects ... */
    if (fetchedObjectsCount >= estimatedResultCount) {
        estimatedResultCount = 2 * estimatedResultCount;
    }
    fractionCompleted = fetchedObjectsCount / estimatedResultCount;
    /* ... pass the fractionCompleted to the progress ... */
}

Beware that an incorrect estimatedResultCount can spoil your progress view. If it is too low, the progress will reach 100%, then be set back to 50% (as the estimated value is doubled) and so forth. If it is too high, it may well stick at 10% for a few seconds and then instantly reach 100%, defeating its purpose of sending meaningful information back to the user. If you are unable to provide an accurate estimate, you can simply display a UIActivityIndicatorView, which would still inform the user of a background task in progress.

Conclusion

As seen previously, Apple’s iOS 8 provides a built-in way to perform time-consuming NSFetchRequests in the background. Additionally, if you have a precise idea of the number of objects that will be returned by a fetch request, you can track its progress and display it to the end user.

We do hope there will be more documentation available soon; in the meantime you can send us feedback or comments on Twitter.


Understand TCP under poor network coverage


When we use our smartphone in a poorly covered network area, like most of the Parisian subway lines, the following scenario happens quite often: we try to load a given resource (web page, JSON, etc.) and get nothing but an endless loading animation. However, after several tries, the resource suddenly appears.

We tried to understand why the resource was not loaded on the first try despite a network that seemed functional. To do so, we performed a low-level study of TCP, the protocol used by most requests.

Concept

We developed a client/server system to compare TCP and UDP performance (meaning download rate, latency and loss rate).

Comparison

Measuring TCP efficiency over the “subway’s 2G cellular network” communication channel comes down to comparing its performance with the best the channel can offer; this is why we used UDP. Indeed, as UDP has no flow control or data reliability, it is possible to evaluate the channel throughput by observing the packet transmission.

Measure

Using UDP, it is possible to get some information about the communication channel: as there is no flow control or data reliability, flooding the network at a higher rate than the channel can handle gives us a measure of its download rate. Meanwhile, at a low rate, observing the received segment numbers tells us the loss rate. Moreover, by looping the received packet numbers back between the client and the server, we can measure the latency. The same information is available at socket level for the TCP protocol.

The server is a simple program written in C using Berkeley sockets, while the client is an iOS application that also uses Berkeley sockets.
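As an illustration of the kind of measurement involved, here is a minimal, hypothetical sketch (not the actual code used in this study) of a UDP receiver that derives a loss rate from sequence numbers carried at the start of each datagram; the packet format and the port number are assumptions.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

int main(void) {
    // Bind a UDP socket on an assumed port.
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9999);
    bind(sock, (struct sockaddr *)&addr, sizeof(addr));

    // Each datagram is assumed to start with a 32-bit sequence number;
    // gaps in the sequence give an estimate of the loss rate.
    uint32_t expected = 0, received = 0, lost = 0;
    char buffer[1500];
    for (;;) {
        ssize_t length = recv(sock, buffer, sizeof(buffer), 0);
        if (length < (ssize_t)sizeof(uint32_t)) continue;
        uint32_t sequence;
        memcpy(&sequence, buffer, sizeof(sequence));
        sequence = ntohl(sequence);
        if (sequence > expected) lost += sequence - expected;
        expected = sequence + 1;
        received++;
        printf("loss rate: %.1f%%\n", 100.0 * lost / (lost + received));
    }
}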

Results

With two identical iPhones, over the same cellular operator network, we were thus able to make several measurements inside the Parisian subway and compare the behavior of both protocols.

We made the following observations:

  • TCP download follows the same trend as the UDP one, but it is slowed down by periods (which can last up to several minutes) during which the download simply stops.
  • Latency is highly variable; it goes from “classical” network values to tens of seconds.
  • The loss rate is also variable; it is regularly over 10%, and sometimes over 20%.

Analysis

To understand those download stops, we have to dive into the TCP mechanisms. Packet loss detection can happen in two ways:

  • Triple ACKs: the server receives three acknowledgments from the recipient (the mobile) indicating that the previous segment has not been received. The server just resends the missing segment while still sending the next ones; overall, the download is not disturbed.
  • Timeout: the server has not received any acknowledgment for a while (the timeout). It stops sending any packet other than the missing one until it receives the expected acknowledgment.

As the channel is at times very slow, the TCP sending window can be too small to allow triple-ACK loss detection, so the server may only detect packet loss by timeout. Moreover, the timeout value is computed by TCP from its latency measurements; given the high variation of the channel latency, this estimate is not reliable and may trigger unnecessary timeout-based loss detection.
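For reference, here is a minimal sketch of the classic retransmission timeout (RTO) estimator described in RFC 6298, which shows why a highly variable latency inflates the timeout; the variable names are ours and this is not the measurement code from the study.

#include <math.h>

/* Sketch of the RFC 6298 RTO estimator. A large RTT variation inflates
 * rttvar, and therefore the retransmission timeout. */
static double srtt = 0.0;    /* smoothed round-trip time (seconds)  */
static double rttvar = 0.0;  /* round-trip time variation (seconds) */
static double rto = 1.0;     /* retransmission timeout (seconds)    */

void update_rto(double rtt_sample) {
    if (srtt == 0.0) {                   /* first measurement */
        srtt = rtt_sample;
        rttvar = rtt_sample / 2.0;
    } else {
        rttvar = 0.75 * rttvar + 0.25 * fabs(srtt - rtt_sample);
        srtt = 0.875 * srtt + 0.125 * rtt_sample;
    }
    rto = srtt + 4.0 * rttvar;
    if (rto < 1.0) rto = 1.0;            /* RFC 6298 lower bound */
}

/* On each retransmission of the same segment the timeout is doubled
 * (exponential backoff), which is what produces the long waits observed. */
void on_retransmission_timeout(void) {
    rto = 2.0 * rto;
}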

We then made a more accurate measurement, packet by packet with Wireshark and tcpdump, during one of these no-download periods.

Here is the case we observed: sometimes, given the channel’s high loss rate, the retransmitted packet is itself lost several times in a row. According to Karn’s algorithm, this causes the segment to be retransmitted with an exponentially increased delay, generating very long waiting times; and when the server is configured with the usual defaults, it eventually ends the connection, leaving the mobile client in a waiting state.

We then customized and recompiled a Linux kernel in which we removed the retransmission delay increase from the TCP implementation. This modification reduced the waiting delay, but the throughput was still lower than with UDP.

Conclusion

We used the simplicity of the UDP protocol to characterize the communication channel in poor cell coverage areas. Then we measured TCP efficiency over this channel. Our main observation is that the poor quality of the channel, combined with the TCP flow control algorithm, results in periods during which nothing is downloaded. This is the phenomenon we generally experience as erratic download lockups in poorly covered areas.

TCP is actually quite efficient given the poor quality of the transmission channel. A large-scale, server-side TCP update does not seem feasible: this is a very specific problem happening only in some unpredictable cases, which makes it difficult to challenge the protocol mechanism itself. A higher-level improvement, for example request segmentation and replays, seems more easily reachable; a sketch of such an application-level retry follows.
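As an illustration of that last suggestion, here is a minimal, hypothetical sketch of an application-level retry with a short per-request timeout using NSURLSession; the timeout value, the retry count and the helper name are ours, not something evaluated in this study.

#import <Foundation/Foundation.h>

// Hypothetical application-level retry: instead of letting one TCP connection
// back off for minutes, give up early and replay the request on a fresh connection.
static void fetchWithRetry(NSURL *url, NSUInteger retriesLeft, void (^completion)(NSData *data, NSError *error)) {
    NSURLSessionConfiguration *configuration = [NSURLSessionConfiguration defaultSessionConfiguration];
    configuration.timeoutIntervalForRequest = 10.0; // short, application-chosen timeout
    NSURLSession *session = [NSURLSession sessionWithConfiguration:configuration];

    [[session dataTaskWithURL:url completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (error && retriesLeft > 0) {
            // Abandon the stalled attempt and try again from scratch.
            fetchWithRetry(url, retriesLeft - 1, completion);
        } else {
            completion(data, error);
        }
    }] resume];
}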

DroidCon Paris 2014


In September, the second edition of DroidCon Paris was held, an Android conference organized by the Paris Android User Group and BeMyApp. The agenda was quite stacked, with a whopping 47 talks in just two days (videos have been made available here). Overall, these talks were extremely interesting and full of relevant technical information.

We would like to come back to a selection of three talks that grabbed our attention the most. We highly recommend checking them out yourself, but we hope the following summaries will give you the basics.

The Death of the Refresh Button, by Mathieu Calba

Let’s start with a user-driven talk. The main idea behind this presentation was to offer solutions to limit the need for an internet connection when the user is using the application.

By syncing the data in the background (even when the app is not active), you could have all the relevant data for the user when he starts using the app. Obviously, this does not work for every app: for example, if your application has a news section, you might want to have the freshest data available and still make a network request when it becomes active. But for Capitaine Train (a French travel agency selling train tickets in Europe), the company the speaker works at, the user is able to access information on his travels, without the need for a network at the time of consultation.

The best solution available uses push to notify the application of updates. If the content of the update is small (for example, a delay), it can be included in the payload of the push. Otherwise, the push can just notify the application that an update is available, and the application can then make a network request for the content on its own terms.

When push is not an option, you have to fall back on periodic polling. The first difficulty is to choose when you want to poll, depending on how probable an update is, network availability, battery level, … You also want a solution that can survive a reboot of the device, so a simple service won’t suffice.

The first solution is to rely on the AlarmManager to wake up your app. The problem is that all registered alarms are cleared on reboot, so you need to register your app to receive boot events (requiring the RECEIVE_BOOT_COMPLETED permission).

A second solution is to create a SyncAdapter. This used to be the “Google way” before Android Lollipop. One caveat is that the user can ask to disable all periodic and automatic sync, which would disable your SyncAdapter. Also, this requires writing a significant amount of code for all the required pieces (the SyncAdapter itself, an AccountAuthenticator, a ContentProvider, …).

The third and last option is the new kid on the block: JobScheduler. As of writing, it is only available in the Android L preview SDK. It relies on a system service (which is restarted on reboot, with its state persisted) and provides a lot of options for configuring the wake-up of your app (battery level, network state, timing, …). The benefits of this solution are a simple API with all the needed features, and being easier on the battery, since only one service checks the network status or battery level and orchestrates the wake-up of the other apps. But the big limitation is that you cannot use it on any device running an OS older than L (which will take quite some time to become mainstream).

You can find the video here and the slides there

Deep Dive into Android State Restoration, by Cyril Mottier

There are a lot of ways on Android in which your application state can be lost. For example, on configuration changes (screen orientation, keyboard, …), the default behavior is to have your activity restarted. The example given during this talk was a form whose fields would be cleared on such a configuration change. You can also lose the state of your previous activities when navigating the application or when the application goes into the background. So what you need to do is tell the system which information you would like to keep in these cases.

The store in which you will want to keep this information is a Parcelable object. Creating your own Parcelable object requires a lot of boilerplate code (which can be generated for you), or you can use a Bundle, which is a Parcelable key-value store. Some of the Android APIs will directly give you a Bundle.

You can persist your state in three places. The first one is the most obvious: your activities. When your activity is about to be stopped, the activity’s onSaveInstanceState(Bundle) method will be called. It provides you with a non-null Bundle in which you can add the information you would like to save. The default behavior of this Bundle is to save the Window, the Fragments and the Dialogs. This Bundle will be given back to you in the Activity’s onCreate and onRestoreInstanceState methods.

You can also save part of your state at the View level. When your activity is destroyed and onSaveInstanceState is called, the first step the system takes is to call saveHierarchyState on the Window. On top of saving the state of the Action Bar, the Panels and the id of the focused view, saveHierarchyState will call saveHierarchyState on the content View. Views have their state automatically saved by the system if they have a unique ID (in the view hierarchy), come from the framework and are “save” enabled (true by default). The ID is necessary because the data will be stored in a SparseArray<Parcelable>, indexed by the IDs (which is also the reason why this ID must be unique). All framework views implement their own onSaveInstanceState and onRestoreInstanceState, but if you create your own custom view, you need to implement these methods yourself. Finally, you can control what gets saved using the setSaveEnabled(boolean) and setSaveFromParentEnabled(boolean) methods.

Finally, you can also save the state of your Fragments. The behavior is very close to saving the state of an Activity. The subtlety is that attached fragments have their views in the view hierarchy, but those bound views are explicitly asked to be saved with the fragment and not with the view hierarchy.

The video is available here and the slides are on speakerdeck

Robotium vs Espresso, by Thomas Guerin

The goal of this talk is to compare two functional test frameworks: Robotium (inspired by Selenium) and Espresso (developed by Google). These are independent of unit tests and focus on views and interacting with them.

Robotium’s API, taking its inspiration from Selenium, relies on the use of a Solo object, which you bind to your Activity. This object provides all the necessary methods to interact with your application (clicks, scrolls, …). You then write assertions to test whether your application behaved as it was supposed to. The speaker recommended the use of Hamcrest, which provides a full set of Matcher objects designed to simplify the writing of your assertions.

Espresso’s API relies on 3 concepts: ViewMatchers, ViewActions and ViewAssertions. You use a ViewMatcher to find a view and a ViewAction to act on it (what Solo does in Robotium) and ViewAssertions for the testing itself. ListViews are treated differently: instead of matching on view properties, you match on the data that has been bound to your row.

With Robotium, WebViews can be interacted with, by using appropriate Solo methods, which allow you to navigate the DOM. Unfortunately, Espresso is still in a Developer Preview status, and WebViews are not currently covered.

If no view matching the requirements is found, Espresso has the advantage of logging the view hierarchy (and the parameters of the views), which helps a lot in understanding why the test failed.

A big difference between the two frameworks concerns how to chain actions. Robotium requires explicit waiting instructions (where you provide what you are waiting for). On the opposite end, Espresso relies on GoogleInstrumentationTestRunner: it will monitor the UI thread to detect when it becomes idle, and then trigger the next instructions. This will allow an automatic detection of most of the waiting periods, but sometimes you need to wait for asynchronous operations. So you can give Espresso IdlingResources, which will make it wait until their isIdleNow method returns true.

In the end, the speaker recommends using Espresso and complementing it with Robotium for testing WebViews (while waiting for Espresso to support them). One caveat is that Espresso is still a work in progress and might still have a few bugs that can impact your tests.

Conclusion

This was only a small sample of all the talks that were offered during these two days. Once again, we highly recommend having a look at them.

Overall, this has been a very positive experience, with a lot of technical content that will take us quite some time to go through, and we will definitely come back next year.

Android L GUI v1.0


Today we are really excited to share some Android GUI Photoshop resources with you!

Last June, the 2014 edition of Google I/O took place in San Francisco. This event was marked by important changes and strong stances on the future of Google interfaces.

The Android 5.0 Lollipop Material Design is going to have a major impact on web, mobile, wearable and car design patterns.

We, @Applidium, were eager to start manipulating the components of this new environment. Therefore, we created a first set of Android L GUI elements for Photoshop, and we are delighted to share it with you today.

You will find a download link below:

applidium_android_l_gui_v1.psd.zip

588.38 KB

You can also download a free pack of Material Design icons on GitHub.

Feel free to share and give us feedback on Twitter
