Channel: Applidium

Audio Modem: data over sound


Even though our devices are increasingly connected, there are times when they can’t connect to the internet. In that case, you can still transfer some data from one device to another using peer-to-peer connectivity like Bluetooth, Wi-Fi Direct, or NFC. But all these solutions require specific hardware and APIs that are not always available. On the other hand, we know that every phone has, by definition, a microphone and a speaker. That’s why we decided to come up with an alternative, AudioModem, that transmits data over sound waves.

Sound: limitations and benefits

In this approach, the most significant limitation is the bandwidth. Indeed, speakers and microphones cannot deal with a very wide spectrum of frequencies: outside the audible range they don’t work so well. So we have to use a narrow spectrum, which delivers a restricted bandwidth. But even if exchanging big files is off the table, it remains a very convenient way to transfer contact information, Wi-Fi passwords or short messages in general.

This solution offers a couple of benefits though. First, we do not have to rely on any infrastructure, since the hardware is already included in a lot of appliances (like TVs and laptops) and could easily be included in others (such as door locks, for example). In other words, this channel works out-of-the-box for smartphones, but can also be cheaply extended to a whole range of smart objects.

Another interesting point is that sound propagates in an omnidirectional fashion, making it perfect for broadcasting (reaching multiple recipients at once). This also means that there is no need for pairing: the receiver does not need to make itself known to the sender before starting a communication (unlike Bluetooth). So a transfer only requires one action from the sender (starting to broadcast) and one action from the receiver (starting to listen for broadcasts).

State of the art

Other people have already looked into data transfer over sound waves, and some implementations are already available, like chirp.io and Yamaha’s Infosound (in Japanese). So let’s have a look at the choices they made:

  • chirp.io uses a bird tweeting sound: while this goes well with the theme of the app, using the whole audible spectrum rules out some interesting uses. For example, using ultrasonic frequencies one could embed the data in a song or in the soundtrack of a movie without affecting the signal perceived by a human being. That’s probably why Infosound decided to only use the ultrasonic spectrum, and that’s the choice we made. Namely, we’re only using the 18.4kHz-20.8kHz range.
  • Both chirp.io and Infosound require some network connectivity: they do not directly transmit the whole data, but only a URL, which then requires an internet connection to download the original data. AudioModem takes a different approach and directly transfers the data without relying on any external connectivity (this way it’ll work in the subway or in that bar where you never get any network).

Modulation choices

Once we settled on a frequency range, we needed to find a way to encode arbitrary data in (inaudible) sound waves. The method for doing that is called modulation (precisely what your modem was doing in the 90s, but using ultrasonic frequencies). The basic idea is to encode the data bits in the sound waves by varying some of the properties of the carrier wave (amplitude, frequency, phase, or any combination of those):

  • Amplitude modulation: modulation and demodulation are easier to implement (and the operations are faster), but this method is really sensitive to disturbances during the transmission (background noise, distortion, …)
  • Frequency modulation: this method uses a larger bandwidth, which is problematic for us, since it makes it difficult to restrict ourselves to inaudible frequencies that can be emitted and received by basic speakers and microphones
  • Phase modulation: this modulation is more resilient to disturbances than the others. But most schemes require a coherent receiver, which is more complicated to implement.

In our case, we implemented a phase modulation variant, DBPSK, which removes the need for a coherent receiver by encoding the information in phase changes rather than in absolute phase values. In the end, its advantages are:

  • It is more robust than amplitude modulation as it is not that vulnerable to noise.
  • It uses less spectral width than frequency modulation so it’s easier to transmit in the inaudible part of the spectrum.
  • It can be demodulated with an easy to implement receiver (known as a non-coherent receiver).
  • It offers an acceptable theoretical data rate.

DBPSK in a nutshell

Phase modulations, including DBPSK, encode bits of data in the sound wave (called the carrier wave) by modulating its phase. In its simplest form, the binary digits (0/1) are mapped to two different values of the phase (0°/180°). For example, here’s what the wave looks like when sending the letter ‘e’ encoded in ASCII (01100101) with PSK:

The difference DBPSK introduces is that instead of sending the raw data (01100101), we transform it by replacing each bit with the XOR of this bit and the previous transformed bit, starting with an arbitrarily chosen bit (1 was chosen, which gives us: 110111001).
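
Here is a minimal sketch of that differential encoding step, written in plain C for illustration (the function name and the one-bit-per-byte representation are ours, not taken from the AudioModem source):

/* Differential encoding used by DBPSK: each transmitted bit is the XOR of the
 * current data bit and the previously transmitted bit, seeded with an arbitrary 1.
 * encodedBits must hold bitCount + 1 entries.
 * For the ASCII 'e' (0,1,1,0,0,1,0,1) this produces 1,1,0,1,1,1,0,0,1. */
void differentialEncode(const unsigned char *dataBits, int bitCount, unsigned char *encodedBits) {
    encodedBits[0] = 1; /* arbitrary reference bit */
    for (int i = 0; i < bitCount; i++) {
        encodedBits[i + 1] = dataBits[i] ^ encodedBits[i];
    }
}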

Additionally, we need to transmit a synchronization code before transmitting our data, otherwise the receiver cannot tell where the data begins. We chose to use a Barker code for this purpose. Our transmission will then look like this:

Finally, such a method usually requires some forward error correction in order to reduce the sensitivity to disturbances.
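
Putting it together, here is a simplified sketch of the modulation step that turns the (synchronization + differentially encoded) bit stream into an ultrasonic waveform. The sample rate, carrier frequency and symbol length below are illustrative assumptions, not the values used by the actual prototype, which generates its samples with vDSP:

#include <math.h>

#define SAMPLE_RATE      44100.0f  /* assumption: typical audio sample rate on iOS */
#define CARRIER_FREQ     19000.0f  /* assumption: somewhere in the 18.4kHz-20.8kHz band */
#define SAMPLES_PER_BIT  256       /* assumption: symbol length, which sets the data rate */

/* Fill `out` (bitCount * SAMPLES_PER_BIT samples) with a BPSK waveform:
 * a 1 keeps the carrier phase at 0°, a 0 shifts it by 180°. With DBPSK, only the
 * phase changes between consecutive symbols carry information. */
void modulate(const unsigned char *encodedBits, int bitCount, float *out) {
    for (int i = 0; i < bitCount; i++) {
        float phase = encodedBits[i] ? 0.0f : (float)M_PI;
        for (int j = 0; j < SAMPLES_PER_BIT; j++) {
            int n = i * SAMPLES_PER_BIT + j;
            out[n] = sinf(2.0f * (float)M_PI * CARRIER_FREQ * (n / SAMPLE_RATE) + phase);
        }
    }
}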

Implementation

We decided to implement a proof-of-concept prototype, and we settled on iOS because it provides us with a few libraries for signal processing. Namely, we used vDSP (part of the Accelerate framework), which provides a set of mathematical functions, and Audio Queue Services (part of the AudioToolbox framework) to access the speaker and microphone.

The source code is available on our GitHub account.

This prototype has an input field where you can type the text you want to send. When you press the “Broadcast” button, the data is continuously broadcast, alternating with the Barker code. When another user is close enough to pick up the signal, he can press the “Receive” button to get the data on his phone. It’s as simple as that!

Final thoughts

Although fully functional, our implementation still lacks major components like error correction and an improved UI. We just wanted to tinker with the concept of sending data over sound waves, and we hope you found this interesting. Don’t hesitate to react to this blog post on Twitter.


Photoshop template of iOS 7 / iPad


A few hours after the first beta release of iOS 7, we had the pleasure of sharing with you a PSD template of iOS 7 on the iPhone. Today we’re introducing the sequel, now targeting the iPad.

In this version, you will find the following elements:

  • Status bar
  • Status bar landscape
  • Safari navigation bar
  • Safari navigation bar landscape
  • Notification center
  • Control center
  • Control center landscape
  • Keyboards
  • Settings list
  • Popover

Download: ios7_ipad_gui_psd_v1.zip (2.31 MB)

An updated version should be available soon, but in the meantime feel free to give us your feedback on Twitter or by email.

Securing iOS Apps - Patching binaries


In a previous article, we showed how to use debuggers to decrypt an application and retrieve its symbol table. With the symbol table, we have access to the names of the methods and constants used in the program. Thus, we can infer its internal architecture and change its behavior. However, this is a runtime approach, which can quickly become painful to repeat every time we start the application. In this post we will focus on a way to permanently alter a program: modifying assembly code in an application executable file, also known as binary patching.

On iOS, the executable part of an application is a Mach-O file. It stores machine code and data (see the Mach-O File Format Reference). In particular, this file contains the executable code of the application, which is the interesting part here.

iOS applications that run on-device are compiled for the ARM architecture. That means you must have a good knowledge of ARM assembly to alter the executable code. As this is a subject in its own right, we won’t go into depth in this post (no architecture details, no compilation, no disassembly). For now, just remember that we call the hexadecimal encoding of an assembly instruction its opcode. For instance, in ARM/Thumb-2, the nop instruction (“no operation”) has the opcode 0xbf00.

To bypass an application’s protections, we need to find them in the assembly and change the associated opcodes. For example, we can invert the outcome of an if condition just by changing a bne (branch if not equal) instruction into a beq (branch if equal) instruction. Or we can load specific values into registers to change the return value of a function.

Modify executable code

Remember the ptrace function discussed in detail in our previous blog post? When called with the PT_DENY_ATTACH parameter, this function prevents any debugger from being attached to the application. As hackers, we would like to get rid of a ptrace call in an application and replace it with “nothing”. Well, we can (“nothing” being a nop instruction)! This way, the application will never call ptrace and we will be able to attach a debugger to it.
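
As a reminder, the protection we want to neutralize typically looks something like the following sketch. Since ptrace is not exposed by the public iOS SDK headers, it is usually resolved at runtime; the function name denyDebugger and the exact setup are ours:

#include <dlfcn.h>
#include <sys/types.h>

#define PT_DENY_ATTACH 31   /* value of the request on Darwin */

typedef int (*ptrace_ptr_t)(int request, pid_t pid, caddr_t addr, int data);

static void denyDebugger(void) {
    ptrace_ptr_t ptrace_ptr = (ptrace_ptr_t)dlsym(RTLD_SELF, "ptrace");
    if (ptrace_ptr != NULL) {
        ptrace_ptr(PT_DENY_ATTACH, 0, 0, 0);  /* this is the call we will nop out below */
    }
}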

Let’s assume we found in the executable file the specific address (0x5f5b2) where the ptrace call is made, thanks to some GDB and otool action, for instance (not detailed here). Here is the output of otool:

0x5f5ae:  dd f8 0c 90    f8dd900c    ldr.w  r9, [sp, #12]
0x5f5b2:  c8 47          47c8        blx    r9            # call to ptrace
0x5f5b4:  0a 99          990a        ldr    r1, [sp, #40]
0x5f5b6:  01 90          9001        str    r0, [sp, #4]

To bypass the call to ptrace, we just need to use a hex editor to replace the opcode 0x47c8 at address 0x5f5b2 with 0xbf00, the opcode of a nop instruction. After the modification, the binary will look like this:

0x5f5ae:  dd f8 0c 90    f8dd900c    ldr.w  r9, [sp, #12]
0x5f5b2:  00 bf          bf00        nop                  # changed with nop
0x5f5b4:  0a 99          990a        ldr    r1, [sp, #40]
0x5f5b6:  01 90          9001        str    r0, [sp, #4]

When this piece of code is executed inside the application, the nop instruction will do nothing in place of the call to ptrace. Thus, the application will no longer be able to block debuggers from attaching.
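
The same two-byte edit can also be scripted. Here is a minimal sketch, assuming the file offset has already been derived from the virtual address 0x5f5b2 (the two generally differ; the Mach-O load commands give the mapping):

#include <stdio.h>

/* Overwrite two bytes at fileOffset with the Thumb-2 nop opcode 0xbf00,
 * stored little-endian, exactly as a hex editor would. */
int patchWithNop(const char *path, long fileOffset) {
    unsigned char nop[2] = { 0x00, 0xbf };
    FILE *f = fopen(path, "r+b");
    if (f == NULL)
        return -1;
    fseek(f, fileOffset, SEEK_SET);
    fwrite(nop, 1, sizeof(nop), f);
    fclose(f);
    return 0;
}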

Compute a binary signature

An attack at the binary level is difficult to counter. If a hacker is capable of rewriting assembly code, he will certainly be able to bypass any protection we add to our iOS application.

We don’t remain totally helpless though. For example, we can try to detect any modification made to the code. By inspecting the Mach-O header at runtime, we can check the integrity of the executable code. This is actually pretty simple: we just have to go through the load commands of the Mach-O structure, find the assembly section (i.e. the __text section of the __TEXT segment), and compute a cryptographic hash of the binary data. This signature (or message digest) will be unique and will change whenever the code is modified between compilation and execution.

In the following example, we are showing a function that computes and checks the signature integrity of an Objective-C program. In this example, we will use the MD5 algorithm to compute the digest.

#include <CommonCrypto/CommonCrypto.h>
#include <dlfcn.h>
#include <mach-o/dyld.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char * argv[]); // declared so dladdr() can locate the app's image

int correctCheckSumForTextSection() {
  const char * originalSignature = "098f66dd20ec8a1ceb355e36f2ea2ab5";
  const struct mach_header * header;
  Dl_info dlinfo;

  if (dladdr(main, &dlinfo) == 0 || dlinfo.dli_fbase == NULL)
    return 0; // Can't find symbol for main

  header = dlinfo.dli_fbase;  // Pointer on the Mach-O header
  struct load_command * cmd = (struct load_command *)(header + 1); // First load command

  // Now iterate through the load commands
  // to find the __text section of the __TEXT segment
  for (uint32_t i = 0; cmd != NULL && i < header->ncmds; i++) {
    if (cmd->cmd == LC_SEGMENT) {
      // The __TEXT load command is a LC_SEGMENT load command (LC_SEGMENT_64 on 64-bit)
      struct segment_command * segment = (struct segment_command *)cmd;
      if (!strcmp(segment->segname, "__TEXT")) {
        // Stop on the __TEXT segment load command and go through its sections
        // to find the __text section
        struct section * section = (struct section *)(segment + 1);
        for (uint32_t j = 0; section != NULL && j < segment->nsects; j++) {
          if (!strcmp(section->sectname, "__text"))
            break; // Stop on the __text section
          section = (struct section *)(section + 1);
        }
        // Use the __text section address, the __text section size
        // and the segment's virtual memory address to calculate
        // a pointer on the __text section
        uint32_t textSectionAddr = section->addr;
        uint32_t textSectionSize = section->size;
        uint32_t vmaddr = segment->vmaddr;
        char * textSectionPtr = (char *)((uintptr_t)header + textSectionAddr - vmaddr);
        // Calculate the signature of the data,
        // store the result in a string
        // and compare it to the original one
        unsigned char digest[CC_MD5_DIGEST_LENGTH];
        char signature[2 * CC_MD5_DIGEST_LENGTH + 1];       // will hold the signature (+1 for '\0')
        CC_MD5(textSectionPtr, textSectionSize, digest);    // calculate the signature
        for (int i = 0; i < sizeof(digest); i++)            // fill signature
          sprintf(signature + (2 * i), "%02x", digest[i]);
        return strcmp(originalSignature, signature) == 0;   // verify signatures match
      }
    }
    cmd = (struct load_command *)((uint8_t *)cmd + cmd->cmdsize);
  }
  return 0;
}

Once this function is integrated in our application, we can check if the original signature (the one computed when we built our project) matches the current signature of the application (the one computed at runtime). If this is not the case, it means that somebody tried to patch the program.
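
A minimal usage sketch (when and how often you call the check, and how you react, are entirely up to you):

int correctCheckSumForTextSection(); // defined above

static void enforceCodeIntegrity(void) {
    if (!correctCheckSumForTextSection()) {
        // The __text section no longer matches the signature computed at build time:
        // the binary has been patched. React in whatever way fits your app
        // (disable sensitive features, report the tampering, exit, ...).
    }
}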

Conclusion

This solution is just one way to check the integrity of an executable file. It can be easily bypassed: in this case, the hacker would just have to re-sign the application. As always, this is only the start of the endless cat-and-mouse game with people trying to bypass security measures, so make up something really hard to crack and your application will last longer than its competitors.

Enjoying our security series? Having an idea for a future episode? Tell us on Twitter!

App Makeover #02 - Amazon


For this second episode of our “App Makeover” series, our design team chose to focus on one of the most important actors in the m-commerce sector, Amazon.

The latest market studies show that mobile accounts for up to 40% of global revenues of the major e-commerce actors. Therefore retention and user experience on mobile are key success factors, no matter what you are selling.

Facts

Despite a certain frugality, the iOS Amazon app appears to be one of the leaders of the e-commerce industry. At first sight, it presents a straightforward interface with its famous killer feature, one-click purchase. Nevertheless, we think its global layout is still very “classic” and web-like.

Our overall feeling is that the Amazon design team simply adapted the web version to the smartphone screen. As a result, the omnipresence of text content (long sentences that do not properly fit in the interface) seems like a major drawback. Given that Amazon has a lot of rich media (pictures, videos, ratings), this really seems like a missed opportunity.

Our position

From our understanding and expertise, the key performance indicators for such apps are overall responsiveness (e.g. the time needed to access product details), the power of the comparison tools and the overall smoothness of the experience. We will detail below enhancement opportunities for the Home, Stores, Product details and Search sections.

We also leveraged the iOS 7 guidelines for a general UI facelift.

Findings

Home

First, we decided to keep the tab bar which really facilitates the understanding of the navigation. To make the interface simpler we chose to use only four tab items, and we put other screens in the settings section.

Since logging in is not mandatory to browse the Home section, we put the login button in the navigation bar, saving some valuable screen real estate. On top of that, this also provides direct access to the “My account” section.

Moreover, we chose to put an emphasis on the Promotions area with a vertical scroll, as customers are used to performing this gesture to load content.

Last but not least, we kept the search bar as a major element of the Home section.

Stores

Stores could also be called Product categories. We simply refreshed this section following Apple’s recommendations for iOS 7.

Product Details

Product Details was the most difficult part of our app redesign since, from our standpoint, the current version looks way too much like a web page. Our intention was therefore to put all the useful information within first sight, reducing the need for scrolling. By useful information, we mean price, discount, users’ ratings, sharing button, pictures and of course the “add to cart” button.

Other information was split into 3 tabs – Description, Users’ feedback, Similar Products – to avoid long scrolls. This tab menu remains visible while scrolling, for quicker access to the other sections.

Search

The original Amazon app leaves a large area blank on its search page. We saw this as an opportunity to offer more advanced search features (categories and target price) in addition to the classic, simpler search menu. We thought it would be the best solution to let the customer find exactly what he is looking for.

Conclusion

So here’s our take on redesigning the Amazon app. We hope you enjoyed it: feel free to give us your feedback on Twitter.

ADTransitionController improved for iOS 7


In one of our previous blog posts, we introduced ADTransitionController, an open source library making it easy to add 3D transitions to iOS applications.

A few months ago, Apple released iOS 7 which brings tons of handy tools for developers, including some new APIs for easier transitions. Guess what? We updated our library to take full advantage of the new APIs and made it even simpler than before!

In iOS 7, Apple redesigned the entire process of animating the presentation of view controllers. The new transitioning APIs they provide are powerful and fully customizable, yet complex to use. That’s why we did the hard work for you and simplified them. Plus, you get beautiful pre-defined transitions (cube, carousel, fade…), and you can now use ADTransition models not only to push and pop view controllers as before, but also to present and dismiss modal views.

Unfortunately, this won’t be available for you if you still need to support earlier versions of iOS, but in this case you can still use our previous API with the same transitions.

The complete overview of the library can be found on our project website. Download the demo project on our GitHub repository and give it a try. Custom transitions have never been that easy!

12 key resources for a mobile project


Here at Applidium, we design mobile services for and with our clients, thus we make extensive use of various tools in order to improve our efficiency.

Today we chose to share the tools we trust, besides the classical Photoshop/IDE/IM combo.

1. Design

Product definition is a key part of the project, and having everyone – the client, project managers, designers and engineers – on the same page remains the best way to move fast and deliver properly. To be efficient, in a nutshell. Our product definition methodology is mostly supported by live, human interactions; nevertheless, some tools usually help the project team share a vision and clear the tougher parts of the project ahead.

Text 2 Mindmap helps us a lot when doing live sorting and ranking of the project features. In the same vein, we use OmniGraffle to quickly mock up feature blocks and do real-time interface zoning, before giving our interaction designers the lead to iterate on a more detailed app storyboard.

Talking about design, we usually invite our clients to a design presentation, and since we’re all used to interactive screens, we usually build for them an animated prototype with Flinto. It helps – a lot – to visualize what the final product will be like.

2. Development

In this post, we won’t talk about IDEs nor performance tools since you probably already know them. However, we’re used to building highly connected services, usually plugged into core business IT infrastructures. Therefore we have to pay a lot of attention when designing, prototyping, testing, and documenting network interactions. Since most of those are based on RESTful web services, Apiary.io has been a very valuable tool for this job.

3. Testing

Another key part of an app project, often neglected, is testing. Since we are designing embedded software, we have to monitor and test behaviors in various contexts: low battery, incoming calls, bad network connectivity… and so on. For this purpose, Apple’s Network Link Conditioner and HTTP Scoop are useful tools to test and monitor app and server interactions. We also noticed that while these tools are great, they do not remove the need to test on a targeted pool of real devices. We usually select the market’s best sellers to provide an optimal experience to as many users as possible.

Last but not least, making private beta releases available is really beneficial while preparing an official launch. It’s also a convenient way to give a sneak peek of your product to opinion leaders. Beta versions also help you gather very useful feedback such as crash logs and qualitative, constructive evaluations by people outside the project team. HockeyApp does a fantastic job on this topic.

4. Collaboration

Designing mobile services puts us at the crossroads of very different domains such as IT, design, marketing, sales and CRM. The people within each of those departments may have specific cultural backgrounds and sometimes not fully aligned objectives. Therefore, at Applidium, we decided to put the stress on effective communication by selecting the right channel for the right purpose: no chat rooms, no archaeological email threads and no megaphones.

To communicate with our clients, we rely on Basecamp and use OmniPlan to provide everyone with understandable schedules. We keep emails for formal internal communications, and use Adium for everything else. Communication between project managers and engineers is kept on a dedicated Redmine tracker. Furthermore, as we manipulate large assets, we share a global repository containing all the project resources within the project team.

We hope these tips will help you understand how we work at Applidium. We’re also very curious to learn from you, so feel free to join the conversation on Twitter.

Through the prism of Glass


Google’s sneak peeks of the GDK (Glass Development Kit) let us envisage Google Glass as an interesting platform when designing mobile services.

The key changes with the introduction of this new development kit are all about Google’s willingness to open up their platform. We now have:

  1. Access to hardware features (camera, microphone, gyroscope, touchpad)
  2. Glassware that can work offline
  3. A Mirror API that we can still use, but that is no longer the only entry point.

As we started manipulating both the device and the toolkit, we began wondering about the synergy between mobile apps and Glassware, and what we could take from one paradigm to the other. Here are our first findings.

It’s all about the context

Putting an interface between our eyes and the world may look a bit odd. Having tried Google Glass, we can confirm that it is indeed strange, but only at first.

After a few seconds of adaptation and playing with the controls, we found it very simple and easy to use. Nevertheless, we realize that the real impact in terms of experience will lie in proposing the right piece of an app for the right scenario. And when we say “piece of an app”, we actually mean it. We think you cannot transfer the entire feature set of an existing app – even the most simplistic – directly to Glass. We have to use a combination of sensors and voice recognition to suggest the relevant information or tool within the prism. Just as in the traditional eyewear industry, people wear shades when it’s sunny, and magnifying glasses when reading tiny type. That’s why we need to think context first.

Natural UI is the new “Less is More”

Besides the device itself, Google introduces a new interface paradigm with Glass. In a nutshell, the navigation follows two axes:

  1. The sequence of activities is displayed right to left – from the past to the future.
  2. One can “dive” within an activity by swiping down.

Moreover, the search menu is accessible from anywhere – wait, is Google behind this? – with the voice command “OK Glass”. In terms of design, screens are standardized and have to follow a card template, focused on readability and clarity. As you can imagine, this “Natural UI” is pretty frugal and leaves very little room for creativity. Thus, it guides both designers and engineers (just as mobile app SDKs did a few years ago, compared to web standards) and helps them focus on the essentials: the service to deliver.

Different form factors, same skills

Despite its specific UI principles, Google Glass is built atop Android 4.0.3. Therefore no specific technical skills are required to master the GDK, apart from a different mindset when designing your Glassware. Looking back to make a comparison, one could say that Glassware has a lot in common with the Android widgets that appeared in Android mobile apps not so long ago: smaller screen real estate, a search for instantaneity, etc.

OK Glass… what’s next?

We see in Glass tremendous opportunities to enhance the mobile services we are creating by proposing new solutions to streamline the user experience… and this is just the beginning. One may compare this GDK announcement to the Apple iOS SDK release in 2008… a few months before the App Store.

In the meantime, we are developing our first Glassware and will be delighted to share the experience with you.

Manage your app availability on Google Play


This blog post follows a previous one about the Android Manifest, on the topic of publishing on Google Play. Here, we will discuss Google Play’s Developer Console. It provides useful tools for published apps: statistics, backtraces for crashes and ANRs (application not responding). But we will focus just on the publishing part.

Developer Console

After developing and testing your application, you can export it. This will get you an APK file (for Android package) that you can distribute through Google Play thanks to its developer console.

Device compatibility

When importing a new APK file and putting it into production, you will be able to see a list of compatible and incompatible devices. Unfortunately, it is currently impossible to know directly why some devices are incompatible. It might be because of an overly restrictive permission, but you will have to guess which one. Besides, if you make changes to your manifest file, you cannot know whether they affect device compatibility until you put the new APK into production.

Exclude devices

You may also want to exclude devices (i.e. your app will not appear on Google Play for these devices) because of known issues (e.g. video player issues, missing codecs, etc.). The exclusion list is easy to manage, and if the problem is solved later, you can always edit it.

Multi-APK support

Another feature for publishing is the multi-APK support. It allows you to upload several APKs for a single app.

You may want to use it if you rely on features introduced in Honeycomb (e.g. HTTP Live Streaming) but still have a strong user base on Gingerbread (more than a third of devices as of July 2013), which forces you to offer a compatible version of your app. You can get the number of (in)compatible devices for each version, but also for the multi-APK setup as a whole. A device will get the APK with the highest version code it is able to run.

New release features

Google introduced new release features: alpha/beta versions and staged rollouts. Basically, the first one allows you to select testers and provide them with the alpha/beta version through Google Play, but they will not be able to leave a public review on the store. Thus you need to provide a feedback channel to your testers. The second one allows you to release your app little by little. As long as a staged rollout is not completed, you won’t be able to update your production app. You may find more information here.

Known issues

There are a few major issues with Google Play which, in some configurations, can be quite annoying. For instance:

  • Payment in Google Play is not available in all countries. For example, if you have a paid app don’t expect to publish it (for the moment) via Google Play in China! This also applies if you have a freemium app. Here is a list of paid app availability on Google Play.
  • Some users might have problems updating their app. It is a recurring problem that Google still has not solved (error RPC:S-5:AEC-0). Here is a list of known issues on Google Play.

Summary

Google provides a pretty good Developer Console, at least for publishing. This is very important, since you can iterate very fast with short cycles, as opposed to Apple’s App Store. Despite all the efforts put in place to address fragmentation issues, it is still hard to ensure that an app will be available and work on all devices. Sometimes, an update may break an app (just take a look at the bad reviews for some apps). But it is often on devices listed as “OTHER”.

Fortunately, Google is heading in the right direction to reduce fragmentation issues:

  • Google Play Services includes APIs that allow developers to use the latest features without waiting for OS updates
  • Android Studio, the new IDE, integrates a tool to see how your app will display on various screen sizes (this was also added to Eclipse ADT v21 in November 2012)
  • Alpha/beta and staged rollout releases on the store to test new features on a limited set of users

Animate your collection views


At Applidium, we love to add animations to our applications. They add context to the interface, and are a beautiful addition to a static UI. In fact, we enjoy them so much that we open-sourced some of our work on this topic (ADLivelyTableView, ADTransitionController). Today, we are adding a new library to the list: ADLivelyCollectionView, a port of ADLivelyTableView to UICollectionView.

The library comes with pre-defined animations. They are here mainly for demo purposes and we encourage you to design your own, in order to produce more subtle visual effects. It takes fewer than ten lines of code to write!

ADLivelyCollectionView was designed as a subclass of UICollectionView, making its integration as simple as adding two files to your project and turning your instances of UICollectionView into instances of ADLivelyCollectionView.

You can find the source code on our GitHub repository. If you have any comments about the library, we’d love to hear from you on Twitter!

Good practices for mobile forms


It is getting hard to publish an app without forms. Users are asked to input data through forms in order to access the service (logging in, registering), complete a task (searching, checking out) or perform secondary actions (for instance getting in touch with the publisher). The constraints of the mobile environment (screen sizes and the on-screen keyboard, among many others) make forms a major source of drop-off. To the question “What makes you uninstall an app?”, 38% of Android users answer having to fill in a registration form. Let’s take some time to think about the problems users can encounter.

This post is the first of a series coming along with our new app optimization activity.

How to avoid typing

How to limit typing

The keyboard can be a major source of frustration for users; here are some tips to limit its use in a mobile context.

Use mainly action buttons, native gestures (tap, swipe) and native features (tap-to-call, GPS, camera).

  • The application SeLoger (v3.2.0) allows users to draw their search zone on the map instead of typing it.
  • The application trivago (v1.80) cleverly uses action buttons for search filters.

Only display mandatory fields; the best practice is to place optional fields in a collapsed area.

  • Opodo (v1.0.0) only displays mandatory fields, while optional ones are reachable through a “More options” button.

Simplify typing

When you cannot get around typing, a few tips can make it easier for users.

Use pickers correctly by grouping what can be grouped: for instance, the selection of a day, month and year can happen in the same picker.
Tip: if a user must choose a departure and a return date, it can be easier to have both fields on the screen at the same time and not close the picker when the next field is tapped. The liligo application (v2.6) does this well.

  • This kind of picker can be reused for other purposes: eDreams (v1.8.0) combines in a single picker the number of passengers to book, broken down by age group (Adults/Children/Babies).

Displaying the correct keyboard is mandatory: for typing an email address, the “@” must be displayed; for a phone number, there is no need for a keyboard with letters.
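
On iOS, for instance, this is a one-line property per field (a minimal sketch; the field names are placeholders):

#import <UIKit/UIKit.h>

// Match the keyboard to the content expected in each field.
static void configureKeyboards(UITextField *emailField, UITextField *phoneField) {
    emailField.keyboardType = UIKeyboardTypeEmailAddress;                  // “@” and “.” one tap away
    emailField.autocapitalizationType = UITextAutocapitalizationTypeNone;
    emailField.autocorrectionType = UITextAutocorrectionTypeNo;
    phoneField.keyboardType = UIKeyboardTypePhonePad;                      // digits only, no letters
}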

Think about input masks: they automatically format what is typed in a field. Users are shown how to enter what is expected and are reassured that they typed it properly.
Tip: input masks must be used carefully, with correct wording, and they do not rule out simplifying the experience by accepting multiple input formats.

  • When registering on Twitter (v5.8), the user does not have to wonder whether to add the “@” in front of his username: it is already in the field.

Knowing and guiding

Guess what the user is about to type

If the user finds it painful to type a piece of information once, typing it several times will probably make him give up. Here are a few tips to avoid this situation.

Autocompletion and suggestions can help with input: for instance, if a user types “user@gma”, it would be welcome to suggest “user@gmail.com”. Errors are reduced and typing is shortened!

Validate the form of the input with a visual cue while it is being typed: this prevents the user from filling in the entire form only to discover that many fields must be filled in again.

Try to fill in some fields automatically: if the user is asked for a zip code, a town and a state, the zip code should be enough to determine the rest of the fields. Note that it would be clever to display them in an order that allows doing so.

You can also pre-fill a form using information found in the user’s own card in the address book.

Default parameters can save time. You can guess or know what choices will be made by most of your users. You can untick the newsletter subscription by default, or assume that most people look for a double room in hotels around their position, for tonight.

  • The application of the dating service Twoo (v3.13) pre-fills the form for a single man looking for a woman, which is certainly its main audience.

Reassure

You have to reassure users when they type personal data. They will be reluctant to give out phone numbers or email addresses. Wording can be used to inform users that they will not be spammed, for instance.

The loss of typed information can be irritating. Make sure information is saved, and reassure users that a tap on the back button will not make the data disappear!

Make it understandable

The clearer the presentation and the workflow are, the better your chances that users will complete your forms entirely.

Make it easy to access the next field while users are typing, especially when the keyboard is on screen.

  • Vine (v1.3.1) does a fantastic job by keeping all fields visible when the keyboard is on screen.

Organizing fields by theme frees the user from potential headaches caused by dealing with information of different natures.

Label placement is key. Labels can sit on the left, outside the field, which can lead to space issues on small devices. You can place them above the fields if you have no problem with stretching out your form. Another solution is to merge the label with the input mask: you gain space, but the label is no longer displayed once the user has started typing. This last solution is OK for short forms, with fewer than four fields.

  • Polar (v1.4.7) places labels on the left, artfully merged with the design.
  • Even better, Instagram (v4.1) uses pictograms in its fields to avoid losing the label when typing. These pictograms also indicate the validity of what has been typed by switching between green and red.

Input masks and placeholder text inside the fields should not simply repeat the field name, even though they are erased when typing starts: they are a great way to show the expected format of the requested information.

Use cases

Forms are vital in certain types of applications. We will describe two major categories: those requiring registration and m-business applications.

Registering

Social plugins for registration must be looked at closely. They allow users to skip a painful form.
Their integration requires following some guidelines, which will be described in an upcoming blog post. Are you thinking of Facebook and Twitter? Many other services allow for quick authentication. You should find out which ones are relevant to your audience.
Tip 1: do not hesitate to tell your users nothing will be posted on their profiles without their approval.
Tip 2: promote these quick registrations!

You cannot avoid typing for one thing: the password. The specifics of the mobile environment allow you to deal with this differently than on a web platform. It has been demonstrated that hiding the password on a mobile device does not increase security, while on the other hand several failures come from mistyped passwords. The best practice is to show the password by default while giving the user the option to hide it. The confirmation field therefore becomes optional.

  • The application Polar (v1.4.7) from Luke W. shows us an elegant integration.

Asking for a confirmation of the email address seems useless. It is smarter to use suggestions to limit typing errors.

A good practice to speed up registration is to ask for the minimum amount of data and to suggest completing the profile later while demonstrating the benefit of a complete profile.

  • Dating service applications often remind you that a profile with a picture boosts your chances to meet someone!
  • SoundCloud (v2.6.3) does not ask for a username when registering and gives one by default. Users can change it later in the settings.

m-business

On top of asking for registration, m-business services also integrate forms for checkout.

Reducing the number of fields is key for the checkout funnel (which can be very long). Many tricks can be used: for instance, instead of asking for a first name and a surname separately, a single “Surname Name” field can be displayed. Most of the time the billing address and the delivery address are the same: it is therefore interesting to automatically fill in the second one while keeping it editable.

Allowing alternative payment services like PayPal or Google Wallet would probably make checkout easier for users of these services.

Registering is probably not what the user wants to do when buying something, so a guest checkout should be available. If the whole experience is satisfying, the user will come back and have an incentive to complete his profile.

Native features can be useful: you can suggest nearby retail outlets using the GPS, or offer call assistance in one tap. The camera can be used to scan barcodes to find a product, or even to scan credit cards to pre-fill payment information.

  • The application Club Privé Vacances (v1.1) uses tap-to-call when checking out.
  • Amazon (v2.5.1), like many m-business applications, allows searching for products by scanning barcodes.

Filling in payment information can be stressful for users. You should reassure them about privacy and security concerns to increase the chances that they will go through the entire process.

These forms are usually very long, so it is useful to give users context with breadcrumbs or a progress bar.

Conclusion

While many applications offer great user experiences for filling in information, we wanted to highlight the major issues in this blog post. Knowing your audience, being aware of the experience you offer and a touch of common sense will help you increase conversion rates on your mobile forms. As these improvements rest upon details, a good analytics integration is essential.

What's geofencing?


Geofencing is a technical word for a quite simple concept: it’s all about setting up virtual areas and monitoring a smartphone’s entry into and exit from these areas. The main benefit is allowing a developer to fire notifications (based on geolocation) even in the background or when the app isn’t running. On iOS, if your app is not active, it can be woken up for a few seconds to handle the geofencing event. The goal of this article is to summarize basic knowledge on the subject, through the eyes of an app maker.

Geofencing in iOS and Android

In its developer documentation, Apple uses the terms “regions” and “region monitoring”. The feature has been available since iOS 4.0 and has been improved since.
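
For the record, registering a region on iOS boils down to a few lines of Core Location (a minimal sketch using the iOS 7 API; the coordinates, radius and identifier are placeholders):

#import <CoreLocation/CoreLocation.h>

// Start monitoring a 200 m circular region; the manager's delegate then receives
// locationManager:didEnterRegion: / locationManager:didExitRegion: callbacks,
// even if the app has to be woken up in the background for the occasion.
static void startMonitoringOffice(CLLocationManager *manager) {
    CLLocationCoordinate2D center = CLLocationCoordinate2DMake(48.8698, 2.3075);
    CLCircularRegion *region = [[CLCircularRegion alloc] initWithCenter:center
                                                                 radius:200.0
                                                             identifier:@"office"];
    [manager startMonitoringForRegion:region];
}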

On Android, geofencing is quite recent (May 2013): it is an addition to Location Services in the Google Play services APK. It gives the developer an abstraction layer and therefore spares him from managing various complex cases.

Both implementations define a circle around a point. If you need to define more complex areas, you will have to take a look at third party solutions.
Here is a list of a few things to know about region monitoring / geofencing in iOS and Android:

                                           iOS region monitoring            Android geofencing
Availability                               Since iOS 4.0                    Since Android 1.6 (Location Services required)
Background execution                       Yes                              Yes
Maximum monitored geofences per app        20                               Unlimited
Possibility to set an expiration duration  No                               Yes
Testing                                    In the Simulator or on a device  Using mock mode

With a little imagination, geofencing can bring cool features to your app. Here are a few examples:

  • In a news app: to refresh content before taking the metro
  • In a calendar app: to remind you to collect your clothes at the dry cleaner
  • For a settings changer app: when leaving home, “on the go” features can be enabled
  • In an ambient location social network app: to have an idea of where your close friends are.

Some apps use it cleverly:

  • Instapaper, a news reader (e.g. refresh your news feed when leaving home)
  • IFTTT, a service that lets you connect an action to a trigger (e.g. unmute your phone when you get home)

Accuracy and battery consumption

This part discusses the compromise between good accuracy and battery life. GPS is very power-hungry, so in order to save battery, geofencing does not rely on it. Instead, it uses cell tower information and Wi-Fi, which provide fair accuracy (when available!) and are less energy intensive.

Accuracy

Yes, cell towers and Wi-Fi help your smartphone locate itself. How? It uses triangulation and relies on the location data Apple and Google collect. Accuracy highly depends on the density of Wi-Fi routers or cell towers: triangulation typically works poorly in rural environments.

To give you an idea of the range of accuracy, Google provides the following indications:

Type     Accuracy                                                       Notes
GPS      “several meters” (5-15 m)                                      does not work indoors
Wi-Fi    “similar to the access range of a typical router” (20-200 m)   fair accuracy
Cell ID  “distances up to several thousand meters” (500 m-10 km)        depends on antenna density

Therefore, you should not use geofencing with the iOS and Android SDKs and expect a very accurate location. iOS’s implementation adds a threshold (which is not documented for Android, but may exist there as well) to prevent spurious notifications:

The user’s location must cross the region boundary, move away from the boundary by a minimum distance, and remain at that minimum distance for at least 20 seconds before the notifications are reported. The specific threshold distances are determined by the hardware and the location technologies that are currently available. For example, if Wi-Fi is disabled, region monitoring is significantly less accurate. However, for testing purposes, you can assume that the minimum distance is approximately 200 meters.

Thus the questions you should ask yourself when designing your app are:

  • Do my users live in town? (dense Wi-Fi routers and cell towers coverage)
  • Do my users leave Wi-Fi on when they use my app? (some may deactivate it when outside, thus decreasing location accuracy)
  • Do I need the notification to be triggered immediately after the user crosses the geofence?
  • Is the worst case accuracy acceptable for my service? (potentially very poor in a rural environment)

Battery usage

Since geofencing only uses the cellular network and Wi-Fi, it should not consume much battery. However, users of iOS’s built-in region monitoring in Reminders have complained about abnormally short battery life. Geofencing’s impact on battery life will be further discussed in our next article on the topic.

Google introduced hardware geofencing with Jelly Bean, which is more power efficient than performing the location computation in software. It is available on the Nexus 4 and the Nexus 7 (2013), and maybe on other devices, but neither Google nor OEMs have communicated about it since Jelly Bean’s release.

They also added a Wi-Fi scan-only mode (software), which is exactly the same as in iOS.

Wrap-up

Geofencing may help you add discreet but awesome features to your app when the use case is well suited (i.e. if the accuracy fits your needs). You will have to test battery drain, but if it turns out to be acceptable, nothing should prevent you from using it!

Both Apple and Google provide developers with quite high-level methods to use geofencing in their apps. To keep battery consumption reasonable, it relies on Wi-Fi and/or cell tower triangulation. The drawback of that technique is its limited accuracy, which highly depends on the density of transmitters, but it can be sufficient for your use cases.

On Android, you can wake up your app with geofencing and then retrieve a GPS fix to improve the accuracy of the location. But if you cannot rely on geofencing because of its coarse precision, you will have to look for another solution. You may have heard of iBeacon, which can help with indoor location, but that will be for another post.

ExoPlayer: Google's new Android media player


This week, Google held its annual developer conference, Google I/O. While the keynote was massively focused on Android (with announcements for Android L, Android Wear, Android Auto and Android TV), it was still too short for all the Android-related news. Luckily, Google provides additional sessions during I/O which yield tons of information.

During the Android Q&A (fireside chat), ExoPlayer, the player currently used in the YouTube and Play Movies applications, was mentioned for the first time and expected to be available “soon”. Since we, at Applidium, spend a lot of time working with multimedia, we were excited to hear more about this. Google, true to its word, released ExoPlayer yesterday. They also provided a brief introduction to the concepts involved.

What ExoPlayer is

MediaPlayer, the default Android multimedia player, is very monolithic: you provide it with the path to the media you want to play (for example, a URL) and let it handle the media to the best of its abilities. That makes for very simple integration, but very difficult extensibility.

With MediaPlayer, all the player logic is hidden

With Jelly Bean (4.2), Android gained new APIs for low-level media manipulation, like MediaExtractor, MediaCodec and MediaCrypto. This means that you can now build your own media player. But it also means that you need to build it from scratch, which can be a significant investment. Also, this limits your application to Jelly Bean (and higher) devices, which (as of June 2014) excludes 28% of potential users.

With Jelly Bean’s APIs, you have to provide the Player logic

ExoPlayer is an attempt to bridge the gap between these two extremes. It provides you with implementations for the different steps of the player logic (Networking, Buffering, Extraction, Decoding and Rendering), yet still leaves you free to swap in your own components.

This provides you with a lot of flexibility. For example, if you want to support a new format (and provide your own codec), or add DRM to protect your content, you can do so without building a whole new player from scratch.

What ExoPlayer isn’t

ExoPlayer only provides changes under the hood. If you replace your current MediaPlayer with an ExoPlayer, the result will be indistinguishable to the user.

This also means that it brings no user-facing features, such as fullscreen playback, which would have been a nice addition.

Features

Out of the box, ExoPlayer provides support for DASH and Smooth Streaming, which are adaptive bitrate streaming techniques (meaning the quality of the video is adapted to the client’s capabilities, namely bandwidth and CPU usage).

This new architecture also makes it simpler to include DRM support. The examples provided by Google show how to include support for their Widevine solution.

The demo also shows how to allow the user to switch between different video and audio tracks, provided you add the necessary components to your interface.

Finally, this pipeline also makes it easier to improve performance. Since you can improve each step independently, you can smooth out the bottlenecks you notice. Google claims that integrating ExoPlayer in Play Movies allowed them to reduce startup playback latency by 65% and rebuffering by 40%.

Integration

Good news: ExoPlayer is not part of the L release! Actually, it is not even part of the support library, but lives as its own standalone project. You can start including it in your projects right now.

It requires API level 9 (Gingerbread) or higher, which covers 99.2% of the user base.

As mentioned previously, Google offers a demo implementation on the GitHub page of the project. Google also updated its documentation to include ExoPlayer.

Conclusion

In the end, this low-key announcement has definitely grabbed our attention. In the coming weeks, we are going to experiment with the newly available features, such as new formats, codecs, audio track switching or DRM. Since this project has actually been in use for YouTube and Play Movies, we are not expecting anywhere near as many bugs as a newly released media player could have. Our only regret so far is that no new user-facing features have been included in this release.

Creating a relevant suite of test devices


“What does having a relevant pool of equipment mean when I have to test applications? If I’d like to create a test set of mobile devices from scratch, where should I begin? My testing pool hasn’t been updated for a long time; what criteria should I check to update it?”

In this blog post, we will offer basic answers to these essential questions. More particularly, we will focus on the criteria to keep in mind while creating a pool of Android test devices.

Indeed, finding a relevant testing set of Apple devices is easier, for several reasons.
For starters, the adoption rate of new versions of iOS is impressive: with the release of iOS 7 in France, more than 50% of iPhone users installed it within a week (source). Now, 90% of Apple devices in the world are on iOS 7 (source).
Also, new iOS versions tend to be compatible with most Apple devices. iOS 8 is a good example as it will be released on every Apple smartphone and tablet with the exception of iPhone 4/3GS/3G and iPad 1 (source).
More generally, Apple’s vertical integration strategy makes life easier for developers. The fact that Apple designs its products from the CPU to the operating system enables developers to create mobile applications in a monolithic environment: the list of different devices is short, and these devices have the same operating system and also a similar hardware architecture.
Consequently, if you are developing applications on iOS, there is no need to test them on every possible device. Testing them on each of the four screen sizes (iPhone 3x/4x, 5x, iPad, and iPad Mini) is enough. Just don’t forget to test them on iOS 6, so as not to leave out that last 10% of users.

Conversely, Google has opted for a more horizontal strategy, focusing on the development of Android and of applications and digital services. Therefore, Google gives device manufacturers free rein in designing smartphones and tablets that will run Android. The consequence is a significant fragmentation of the Android platform, with a host of entirely different devices (11,868 different Android devices were seen last year – source) running several scattered versions of the Android OS. However, Google has made a platform that gives developers a pretty good level of abstraction, avoiding case-by-case testing. But in theory, to be certain that an app works on all devices, you would have to test it on every single one! Obviously, this is impossible. But don’t panic: this article will help you build your pool of equipment in a way that lets you guarantee to most users that your app actually works.

What screen size and density should the devices on which I test have?

The first choice you have to make when it comes to getting new mobile devices is about display features.
If you’re starting your testing set from scratch, it could be interesting to check out the screen size and density distribution of Android devices across the world (Google source). With those figures in mind, you will be able to create a testing pool that represents the distribution of Android devices among the population quite well. Then, by testing your apps on the most common displays, you can be confident of satisfying most of your users. Check out how these display categories are defined in the accompanying schema.

If you just want to update your group of test devices, try to keep this in mind: a large number of Android users don’t have high-end devices. You don’t necessarily have to modernize your testing pool. Take action to make sure that your apps also work on small-screen and low-resolution devices. Try to have at least one device of each size type: small, normal, large and xlarge. As for density, the widespread screen types nowadays are mdpi, hdpi, xhdpi and xxhdpi. Again, you should try to have one of each type and, if possible, have various devices that mix different densities and screen sizes.

How powerful should my testing pool’s devices be?

Let’s keep talking about hardware specifications, and more particularly about how powerful your devices should be. This means how fast and efficient your devices’ SoC (System on a Chip, which includes the device’s CPU and GPU) should be. Again, diversification is the key word.

Having only powerful smartphones equipped with bleeding-edge SoCs would be a huge mistake. Actually, you would be better off testing your apps on Android devices with average or low computing power. This way, when your apps run well on these devices, you can be certain that you’re satisfying the average user, and that your apps will undoubtedly work well on faster devices.
Whether your intent is to build a new set of equipment or just to update it, the main idea is always the same: be sure to have devices from all technical ranges.

Do I have to care about the OS tweaks and modifications added by manufacturers on Android phones?

You definitely have to. The modifications that smartphone manufacturers preinstall on top of Android (including UI changes known as skins and sometimes deeper changes to the OS) may change how your app works. Depending on the manufacturer and the way Android was modified on a given phone, layout bugs may appear.
For instance, some OEMs install their own browser, built on its own architecture. The consequences for developers are problematic: a single website will render differently on each tweaked device, and also on each OS version.
Nevertheless, Google makes great efforts against fragmentation (source). Recently, device manufacturers have been forced to respect a few rules with their tweaks and skins. If they choose not to respect these rules, they automatically lose the ability to use Google services, such as Google Maps. For instance, manufacturers are compelled to set Google as the default search engine in their devices’ browsers and to preinstall ten of Google’s apps. Thanks to that, Google has managed to limit the fragmentation introduced by these modifications.
To avoid bad surprises, be aware of the OEM modifications installed on the devices your app will be running on.

Getting only devices running the stock Android UI to test your apps would be a bad idea: an app working perfectly well on stock Android won’t necessarily work as well with OEM modifications. Naturally, the best practice is to test your apps on the most common skins: Samsung’s TouchWiz, HTC Sense, Sony’s Xperia UI or LG’s Optimus UI. Again, you won’t be able to test your apps on every single OEM skin, but by testing them on those four skins along with stock Android, you’ll cover most users.

And what about the Android versions on my devices?

Now that we’ve talked about UIs, let’s take a deeper look into the OS particularities.
Google publishes interesting figures about the distribution of Android versions across the world (Google source). If you’re just starting to build your testing pool, it might be worth mirroring the global distribution of OS versions. As you can see, the most widespread version of Android is Jelly Bean (4.1-4.3), so you must make certain that your app works on at least those versions. Then you can diversify your testing pool with devices running Ice Cream Sandwich (4.0) or KitKat (4.4), and finally older, more particular versions like Gingerbread (2.3.x), which represented only 14% of the global distribution in July 2014 (Google source). Nevertheless, experience indicates that more problems usually come from the tweaks added by manufacturers than from the differences between OS versions.

Ultimately, the main idea to keep in mind about OS and skin choices is that you should diversify the manufacturers in your pool before diversifying OS versions.

Anything else I should know before I start to create/update my testing pool?

Before you begin to think about your pool of devices, ask yourself how exactly you are going to use it: what kind of apps are you going to develop? What users are you targeting? “Geek” users equipped with brand-new high-end Android smartphones, or average users who may have outdated low-end devices? Also be aware of the hardware features your app is going to need: sometimes these features are missing, even on high-end smartphones. Take a look at the issue Game Oven, a mobile game studio, faced with its game Bounden. The game really needed a gyroscope to work properly, which is a common piece of hardware in current smartphones; nevertheless, they discovered that their game didn’t work properly on many Android devices because some of them had no gyroscope or a very poor one…
You should also obtain information about the market you want to target, as its features can have a direct impact when creating a set of test devices. For example, Apple is the top manufacturer in the US, Xiaomi leads in China, and Micromax prevails in India. It’s important to target your tests and not shoot for the global market.

In a nutshell, try to keep these points in mind:

  • Almost 80% of smartphone users have Android devices, but only a minority own recent high-end phones: relentlessly modernizing your pool won’t be as useful as you might think.
  • Android is a very fragmented platform, but you can increase your chances to satisfy the maximum number of users by diversifying your testing pool in terms of hardware, skins, and OS versions.
  • Always adapt this advice to the specifics of the mobile application you’re developing.

iOS 8 extensions


As we are counting days before the public release of iOS 8, we wanted to share some context and tips around App Extensions. An extension offers custom functionality of an app to the whole system, extending it in new and meaningful ways. Parts of the operating system that were once off-limits are now open, providing fresh surface area for your app to touch. After an introduction to extensions in general, we’ll delve into the world of Today widgets, which we believe will foster new and innovative ways of interacting with apps.

iOS 8 extensions

Extension points

An extension’s API is defined by a particular extension point. Extension points are the places in the system that an extension can bind to, such as Notification Center or the iOS keyboard. Each extension point is packaged within a system framework, meaning that third-party extension points are not allowed. An extension point also defines policies such as launch characteristics and how the extension is presented from within the host app.

Each extension point, with an example use:

  • Today: get a quick update or perform a quick task in the Today view of Notification Center (a Today extension is called a widget)
  • Share: post to a sharing website or share content with others
  • Action: manipulate or view content within the context of another app
  • Photo Editing: edit a photo or video within the Photos app
  • Document Provider: provide access to and manage a repository of files
  • Custom Keyboard: replace the iOS system keyboard with a custom keyboard for use in all apps

Whether it’s translating text instantly within Safari or performing simple actions within Notification Center, extensions remove friction present in previous versions of iOS.

What is an Extension?

An extension is meant to be streamlined and focused on a single task. However, it is important to realize that an app extension is not an app. Rather, an extension has its own code signature and set of entitlements, and is built as its own binary. Each extension must be delivered within a containing app, that is, an app that contains one or more extensions. When a user installs a containing app, its extensions are installed alongside it.

Life Cycle

An extension is almost always instantiated from within a host app. For example, a user finds a picture in Safari, hits the share button, and chooses an appropriate extension. Safari is the host app and thus defines the context in which the extension lives, but it does NOT launch the extension directly. Rather, Safari talks to the Social framework, which then discovers, loads, and presents the extension. The extension code is run, and uses system-instantiated communication channels to pass data. Once complete, Safari tears down the extension view and the system terminates the process.

Communication

While running, an extension only communicates with its host app. There is no direct communication between an extension and its containing app, nor between the host app and the containing app. Indirect communication, however, is available using well-defined APIs. These may be used when an extension asks for its containing app to be opened (openURL:), or when an extension wishes to use a shared resource container (NSUserDefaults).
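For example, a Today widget can ask the system to open its containing app through its extension context. Here is a minimal sketch; the myapp://today URL scheme is an assumption and would need to be registered by the containing app:

// Ask the system to open the containing app from a Today widget.
// "myapp://today" is a hypothetical custom URL scheme registered by the containing app.
NSURL *url = [NSURL URLWithString:@"myapp://today"];
[self.extensionContext openURL:url completionHandler:^(BOOL success) {
    if (!success) {
        NSLog(@"Could not open the containing app");
    }
}];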

The Today widget example

Today widgets live in the Today view of Notification Center. They are designed to give a user quick access to information that’s important right now. Apple has used widgets in previous versions of iOS to give updates on the weather, calendar events, or to mark a reminder as completed. A widget is meant to be used for quick updates or simple tasks (such as showing the latest updates, the progress of some background work, or polling the user). Performance is paramount for a widget, so reconsider if you want to create an extension that performs lengthy or intensive tasks.

Add Application Extension Target

When you’re ready to start creating your widget, simply add a new Application Extension target to your containing app. You’ll be left with a property list file (Info.plist), a view controller class, and a default user interface. You’ll now see “1 New Widget Available” in Notification Center, where you can add the widget by tapping “edit”.

Enable App Groups

In order to keep your widget and containing app in sync, you will need to create a shared data container. This is accomplished by enabling the App Group entitlement for both the containing app and extension.
In Xcode, go to the “Capabilities” tab for your containing app and enable “App Groups”. Select your Development Team as provisioning and add a new group. Repeat this process for the widget target, except this time you’ll use the already created App Group instead of making a new one.
Once done, this newly created App Group needs to be registered in the developer portal, and App IDs for your containing app and widget need to be added to the group. If all is well, we can now use NSUserDefaults to share data between our containing app and extension.

Syncing content

To enable sharing between your containing app and widget using NSUserDefaults, use initWithSuiteName: to instantiate a new NSUserDefaults object, passing in the identifier of the newly created App Group. Whenever it’s appropriate, update this shared data container for use by your widget. For example:

// Create and share access to an NSUserDefaults object
// (the suite name must match the App Group identifier, which starts with "group.")
NSUserDefaults *mySharedDefaults =
    [[NSUserDefaults alloc] initWithSuiteName:@"group.com.example.domain.MyTodayWidget"];
// Use the shared user defaults object to update the user's account
[mySharedDefaults setObject:theAccountName forKey:@"lastAccountName"];

Update the snapshot

The system will often take snapshots of your widget, cache them, and present them the next time your widget is displayed. In order to make sure the snapshot is taken at the right time, conform to NCWidgetProviding and implement the widgetPerformUpdateWithCompletionHandler: method. Inside this method, check whether you have new content to display and pass the appropriate NCUpdateResult to the completion handler.
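A minimal sketch of what this might look like; the displayedItems property and the fetchLatestItems helper are hypothetical placeholders for your own update logic:

// In the widget's view controller, which conforms to NCWidgetProviding.
- (void)widgetPerformUpdateWithCompletionHandler:(void (^)(NCUpdateResult result))completionHandler {
    NSArray *latestItems = [self fetchLatestItems]; // hypothetical helper fetching fresh content
    if ([latestItems isEqualToArray:self.displayedItems]) {
        // Nothing has changed since the last snapshot
        completionHandler(NCUpdateResultNoData);
    } else {
        self.displayedItems = latestItems;
        [self.tableView reloadData]; // assumes a table-based widget
        completionHandler(NCUpdateResultNewData);
    }
}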

Design the UI

As space is limited for a widget, it’s important to use the available area efficiently, while keeping the user experience quick and focused. Ideally, your widget should look like it was designed by Apple. A widget’s width is constrained to the Today view, but its height can be increased to display more content.

  • To adjust height, Apple recommends that you use Auto Layout constraints. However, you can also set the preferredContentSize property (see the sketch after this list).
  • To mimic the blur and vibrancy effects in Notification Center you can leverage UIVisualEffectView.
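For instance, a widget might request extra vertical space like this (a sketch; the 200-point height is an arbitrary assumption, and the width is dictated by the Today view anyway):

// Ask Notification Center for more vertical space.
self.preferredContentSize = CGSizeMake(0.0, 200.0);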

As a note, keyboard entry is not supported.

Performance and Other Considerations

At its heart, a widget is a view controller. This means that it follows the same life cycle that developers are already used to (viewWillAppear, viewDidDisappear, etc.). As performance is important for a widget, make sure you’re ready to display content by the time you return from viewWillAppear. This means caching data and running expensive operations as early as possible and in the background. As many widgets can be open at the same time, memory limits are aggressively enforced. Being a good citizen also means not blocking the main run loop or hogging GPU resources.
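A minimal sketch of that pattern (loadCachedItems is a hypothetical helper; the expensive work is pushed off the main thread):

- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    // Start expensive work as early as possible, off the main thread.
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        NSArray *items = [self loadCachedItems]; // hypothetical helper reading from the shared container
        dispatch_async(dispatch_get_main_queue(), ^{
            self.displayedItems = items;
            [self.tableView reloadData]; // assumes a table-based widget
        });
    });
}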

Not all APIs are available to extensions. These APIs are marked with an “unavailability macro”, such as NS_EXTENSION_UNAVAILABLE. Apple defines them as “inappropriate for app extensions”. A noteworthy example is the shared application object ([UIApplication sharedApplication]).

Debugging

Debugging a widget in Xcode is quite similar to a normal app. Before running, choose the extension scheme in Xcode. You will then be prompted to choose a host app. In our case, “Today” will be the host. The debugger hasn’t been too consistent for beta versions of Xcode 6 and will often lose connection to the extension. It may be necessary to manually attach your extension process to the debugger by going to Debug > Attach to Process. It’s also a good idea to keep an eye on the debug gauges while running your extension.

Summary

Extensions are an exciting addition to iOS 8, and may well be the one with the most visible impact. They’re meant to remove friction and make tasks faster and more enjoyable for all users. Extensions will provide intercommunication between apps, while widgets turn Notification Center into a central hub for “at a glance” information and tasks.

App Makeover #03 – Snapchat


For this new episode, our design team chose to rethink the interface of the ephemeral media sharing application Snapchat.

Snapchat was launched in 2011 and now has over 82 million monthly active users, who exchange more than 760 million photos and videos every month on the network.
Behind this success lies a simple and innovative concept for capturing and sharing media. If you’re not already familiar with the application, the peculiarity of Snapchat is how it treats the media you send: once the picture is taken, the user sets a playback time that limits how long the Snap can be viewed before it becomes unavailable.

Our position

While the application offers a host of features such as chat, video calls, photo filters, and “stories” (videos and/or photos viewable for 24 hours), we focused only on photo sharing.
In this style exercise, we take a critical look at some ergonomic and graphic aspects.

The app

Navigation

Our first analysis focused on the navigational elements of the main screens: home, my snaps, my friends.
The illustration below highlights the problem areas.

Current Application : My Snaps, Home and My Friends screens

The two icons at the bottom of the home screen are schematic representations of the main navigation elements. This strong graphic choice forces a counter-productive first step on the novice user: tapping an icon just to open the screen behind it and understand what it does, when the symbol should convey that action at a glance.

On the “my snaps” and “friends” pages, the native back button is replaced by a camera icon. The idea makes sense, since this button returns to the home (shooting) screen.
Nevertheless, for the transition from the “My snaps” screen to the “Home” screen, this becomes inconsistent: the button sits on the side opposite to the direction of the slide transition it triggers (see the animation below).

Animation of the 3 main application screens

Result

We present below our proposal next to the current application (version 7.0.5).

Log In / Sign Up

Log In / Sign Up screens

As it stands, the login flow requires two screens to get into the application. We chose to combine them into a single screen so the user can start enjoying the application more quickly.

Home

Home page

Consider the main use case: capturing an event and sharing it with friends. Starting on the camera is an absolute necessity for taking a shot instantly, so the ergonomics of this screen deserve to be examined thoroughly.

Unlike the current version, where the most important elements (“my snaps”, “shooting”, “my friends”) are located at the bottom of the screen, we suggest a different approach: grouping elements by type, with navigation elements on one side and shooting tools on the other.
Using the native components offered by the platform helps users orient themselves easily. We propose to reuse the space allocated to the navigation bar and move the “my snaps” and “my friends” entries there, at the top of the screen.

My snaps

My snaps

We reworked the very schematic pictograms for sending and receiving media into more literal ones.

Then, to make browsing easier, we dimmed the snaps that have already been viewed, which makes the unviewed ones stand out. Finally, we indicated a reply to a snap with an arrow in the cell.

Viewing a snap

Viewing a snap

Currently, the user has to keep a finger on the screen to view a received snap, so the picture may be partially hidden by the hand. We propose a different approach: a single tap, which is more common for consulting content and lets the user fully enjoy the media. We illustrated the countdown with a progress bar that also doubles as a “quick reply” button.

Customizing a snap

Customizing a snap

Once the picture is taken, the user can customize the snap with text and color, and choose the display time.

Again, we grouped the elements by type: three editing tools in the toolbar, and the two actions “cancel” (quit) and “send” (go to the next step) in the navigation bar.

Customizing a snap – Time, Colors, Text

Our work here :

  • a simplified timer: we avoid swiping through a picker by offering single-tap selection
  • the color selection tool was hard to use, being too small relative to a fingertip; we simplified it to five colors matching the application’s hues
  • the “text” feature works well, so we keep the formatting offered by the current application

Customizing a snap – Summary before sending

Send to…

Send to…

We standardized all of the application’s navigation bars in green to keep the different screens consistent.

We propose a slightly different layout, notably using profile pictures for contacts in order to give “friends” a more personal feel.

Recap

Given its popularity, it seemed interesting to study this application and to offer a different, more efficient in-house take on it. We hope “Snapchatters” will appreciate our work, and we would love to hear their feedback on the topic.


Mastering blur and vibrancy with iOS 8


With iOS 7, Apple introduced blur in some native elements (UINavigationBar, UITabBar, UIAlertView). However, developers could only mimic this effect outside of those elements. Starting with iOS 8, native blur is available to everyone, and we will delve into UIVisualEffectView to master it.

Original image credits: Marac Andrev Kolodzinski/Caters News/Zuma

Blur

Having a view with a blurred background is now trivial with UIVisualEffectView. Simply use the initWithEffect: initializer and be careful to add the subviews to the contentView.

UIBlurEffect *effect = [UIBlurEffect effectWithStyle:UIBlurEffectStyleLight];
UIVisualEffectView *viewWithBlurredBackground = [[UIVisualEffectView alloc] initWithEffect:effect];
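To complete the picture, here is a minimal sketch of how the blurred view might be placed in a hierarchy; the frame and the label are arbitrary assumptions:

// Place the effect view where the blur should appear.
viewWithBlurredBackground.frame = self.view.bounds;
viewWithBlurredBackground.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
[self.view addSubview:viewWithBlurredBackground];

// Subviews must go into contentView, not into the effect view itself.
UILabel *label = [UILabel new]; // hypothetical content
label.text = @"Hello blur";
[label sizeToFit];
[viewWithBlurredBackground.contentView addSubview:label];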

Vibrancy

Along with blur, Apple introduced the notion of vibrancy. It is used to highlight an element inside a blurred view. Vibrancy does not fit in the classical model of layers drawn on top of each other to produce a final result. A UIVisualEffectView configured with a UIVibrancyEffect, wherever it sits in the view hierarchy as long as it is included in the contentView of its parent blurred UIVisualEffectView, will temper the blur to highlight the subviews of its own contentView.

If you’re not sure what this looks like, just open Notification Center and highlight one of its cells.

Working with the previous example, if you want to demo vibrancy you’ll do:

UIVibrancyEffect *vibrancyEffect = [UIVibrancyEffect effectForBlurEffect:effect]; // must be derived from the same blur effect
UIVisualEffectView *viewInducingVibrancy = [[UIVisualEffectView alloc] initWithEffect:vibrancyEffect];
[viewWithBlurredBackground.contentView addSubview:viewInducingVibrancy];
UILabel *vibrantLabel = [UILabel new]; // Set the text and the position of your label
[viewInducingVibrancy.contentView addSubview:vibrantLabel];

Original image credits: Zanza

Tuning vibrancy to create beautiful Today widgets

As mentioned earlier, Notification Center is one of the most emblematic uses of vibrancy in iOS. Achieving a seamless integration of your widget cells’ highlighted state requires two tricks:

UIVibrancyEffect *effect = [UIVibrancyEffect notificationCenterVibrancyEffect];
UIVisualEffectView *selectedBackgroundView = [[UIVisualEffectView alloc] initWithEffect:effect];
selectedBackgroundView.autoresizingMask = UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleWidth;
selectedBackgroundView.frame = self.contentView.bounds;
UIView *view = [[UIView alloc] initWithFrame:selectedBackgroundView.bounds];
view.backgroundColor = [UIColor colorWithWhite:1.0f alpha:0.5f];
view.autoresizingMask = UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleWidth;
[selectedBackgroundView.contentView addSubview:view];
self.selectedBackgroundView = selectedBackgroundView;

And if you’d like to know more about Today widgets, we warmly invite you to take a look at our dedicated blogpost.

How to perform a Google Glass screencast


After several hours of hard coding, your brand new Google Glass app is ready. It’s time for a demonstration! But wait… we’re talking about glasses, how is it possible?

Using Google Glass is a single-person experience. That is why a demonstration is a little bit tricky: you cannot see what the user wearing the glasses is doing and you can hardly help him.

But don’t worry! We have a solution for you.

Google Glass Screencast

In this article, we will show you how to duplicate the screen of your Google Glass onto your computer (or your Android device).

Note that you can use the exact same procedure for any other Android device.

Installation

Start by downloading the required tools.

  • Install the JDK and add it to your PATH variable.
  • Install the Android SDK and add the tools and platform-tools folders to your PATH variable.

To mirror the screen, you can use either Droid@Screen or Android Screen Monitor (both are used below); we recommend Droid@Screen, which is more user-friendly and provides a lot of useful tools: zoom, rotation…

Demo time

To start the screencast you have to:

  • Turn on debug mode on your Google Glass. The option can be found in the Device Info sub-menu in the Settings of your glasses.

Turn on debug mode

  • Connect the glasses to your computer through USB. A popup may appear asking you whether you want to trust the computer or not. “Allow” is the right answer!

Allow debugging from your computer

  • If you are using Droid@Screen, simply double-click the .jar file. If you are using Android Screen Monitor, open a terminal and enter the following command: java -jar asm.jar.

Screencast using Droid@Screen

Wireless!

If you want to be classy and get rid of the USB cable, it is possible! But you will need a few more terminal operations and the latency will probably be higher…

You need ADB (Android Debug Bridge) which you actually already have since it was shipped with the Android SDK. You can find ADB in the platform-tools directory.

  • Open a new terminal.
  • Plug the glasses to your computer through USB and use the adb devices command to check the connection.
  • Find the Google Glass’ IP address by typing adb shell netcfg in the terminal. The IP address (for example 10.0.0.12) should be accessible in the wlan0 section.
  • Start TCP mode using the adb tcpip 5555 instruction.
  • Launch connection using adb connect [Google Glass IP] (for example adb connect 10.0.0.12)
  • Unplug the Google Glass.

If you need to go back to USB mode, enter adb usb.

Using an Android device

If you want to screencast your Google Glass on another Android device, typically a tablet, Google provides a very useful application: MyGlass.

Download the app, log in using the Google account associated with the glasses and follow the instructions to pair the Android device and the Google Glass. Then, launch the Screencast mode.

One more thing…

The app allows you to control the glasses from the tablet!

Ok Glass… demo time!

At the IoT block party, Mobile app is the MC


Connected objects and the Internet of Things clearly are trending topics. The world is already filled with almost 9 billion connected devices. While fighting for shares in the smartphone industry, competitors are unveiling their connected products, seeking to claim a pioneering position in this highly valuable market.

We, at Applidium, see this trend as a tremendous opportunity to enhance existing mobile services and imagine disruptive experiences.
As we started developing for Glass and smartwatches, we wondered what the future of mobile apps would be in an Internet of Things world, and we would like to share our views with you today.

Connection in progress

A brief history of connected objects

The name “Internet of Things” was first used by Kevin Ashton (P&G) in 1999 to refer to the connection of RFID tags to the Internet. After a few early attempts, the first real connected object was the Nabaztag. This rabbit, introduced by Rafi Haladjian’s company Violet in 2005, quickly became a flagship item: Wi-Fi enabled, the Nabaztag could download weather forecasts or read its owner’s email, and was customizable. In 2006, Apple collaborated with the swoosh sport brand to launch Nike+iPod: a tag placed in one of your shoes recorded data from your running sessions and sent it to your iPod Nano. And in 2010, Parrot created a new entertainment experience with the AR Drone, controlled by a smartphone over Wi-Fi, with onboard sensors and a camera allowing augmented reality games. In its first years, the IoT market was already widely diversified.

A growing market

As you surely know, the current trend of connecting everyday objects could become a large market and greatly transform our habits.
From wristbands to lightbulbs, including weighing scales, smoke detectors or even clothes, every common object now finds its place in a connected world.
The IoT market has been quietly growing over the past years and now seems ready to skyrocket. By 2020, 50 billion connected devices are expected to spread across the world, and the market could be worth $5,000 billion. (Source)
The diversity of connected objects seems limitless and will contribute to this expansion. Health, home appliances, sport, entertainment: every domain is concerned, and they all have one thing in common: your mobile.

One phone to rule them all

The rise of the IoT market has been made possible thanks to the progress in the smartphone industry. Since 2007 and the introduction of the original iPhone, smartphones have invaded the world, constantly evolving and improving their characteristics, and especially their connectivity, with Bluetooth, Wi-Fi, LTE and NFC.
The war among the competitors has started a race for miniaturization, and opened the gates for small connected devices able to communicate efficiently with your phone.
The smartphone will become the gateway to your connected life. Taking a close look at the current landscape, we envisage three realistic scenarios, described below.

Smartphone is the middleman

The configuration panel

Many connected objects look simple and passive. Take, for example, screenless wristbands such as the Fitbit Flex, Jawbone Up or Withings Pulse. Those trendy objects, stuffed with attractive features and a lot of sensors, are useless without their dedicated app. The first step before you can use them is to configure them with your smartphone. The app allows you to create a profile, select the features you want to use and set an alarm, for example. The software is also a continuation of the object, allowing you to see your statistics, the most famous being your number of steps. Thus, in spite of its ability to collect data on its own, the connected wristband needs the smartphone to give those pieces of information back to you, as well as to get OTA firmware updates. The link between the mobile and the object is more than just a Bluetooth LE connection: they complete each other.

The remote control

The mobile also finds its use as a remote to control all of your connected objects. Imagine your phone as a universal, all-in-one, instantly configurable remote. From drones to doorlocks, including your smart thermostat or your lightbulbs, your phone pilots everything. Turn off the lights, adjust the temperature, check your security cameras or skip the song on your connected speakers, all of this with your smartphone. With HomeKit, Apple enhances this experience, turning your phone into a Siri-controlled remote that does what you say.
Moreover, linked with beacons (other small connected devices), the possibilities are endless: approach your house and the door unlocks, the lights go on and your smart house comes to life.

The connection hub

When it comes to wearables, such as watches or glasses, trade-offs have to be made between efficiency, design, price and battery life. With Glass, Google made a clear choice: the wearable is an extension of the phone, not a device of its own. In that case, Glass uses the resources offered by the phone, such as its connectivity (LTE or Wi-Fi), its powerful processor and its storage capacity. Thanks to that, Glass can guide you with GPS or record videos. At Applidium, the Glassware we design must enhance the experience by taking the best of both worlds. The phone as a central hub could also let different objects communicate with each other. Thus, your Glass could alert you that your dog left the house, or that the humidity in the kitchen is too high. But when you look at current smartwatches, many differences can be seen. Most of them follow the Glass phone-dependent model, but a few (such as the Samsung Gear S) try to bypass the mobile, with a nano-SIM slot and their own storage. There comes a dilemma: should the watch be as thin as possible and take the best out of the smartphone, or should it be autonomous and act as a stand-alone device?
Apple might have just brought the answer to that question: hybrid! With its simply named Apple Watch, the Cupertino firm makes a giant leap for the IoT. Although many things still have to be specified, the next trendy object can act as a stand-alone, sensor-filled sport device with storage capacity for music and photos, or as an extension of the iPhone, using its connectivity for GPS, calls or SMS, NFC for Apple Pay, and third-party apps.
We might see a pre and post Apple Watch era.

IoT-focused design

Smart as a double-entendre

Conceiving an app for a connected device can be trickier than for a mobile alone. The first thing to do is to define precisely the role of the smartphone. This approach will depend on the needs: a Glassware must be more pared-down than the configuration panel of a home surveillance kit. At Applidium, we usually design mobile apps following the iOS and Android guidelines in order to give users the best experience. When it comes to developing software for a connected object, we follow the same policy in order to create a breakthrough experience: something useful, easy to use, and in line with users’ expectations.

Early-stage development

As the IoT is quite recent, wearable apps are still at an experimental stage. Like the first apps in the App Store, they don’t take full advantage of the possibilities offered by the connected world surrounding us. With the Glass and Android Wear guidelines, Google took a first step towards standardizing these apps. The GDK (Glass SDK) looks like a streamlined iteration of Android, but the new application/object APIs still need a lot of refinement. By expanding its Explorer Program for Glass, Google seems to be filling this gap and improving the connected experience.
With WatchKit, Apple will bring guidelines that fit the new UI of its long-awaited Apple Watch. This system seems more mature than Android Wear, allowing many ways to interact: through the Digital Crown, through the pressure-sensitive, haptic-capable screen, and through third-party apps. Although the philosophy behind WatchKit truly differs from Android Wear, they share some similarities, and designing apps for them is a natural continuation of a developer’s work.

Retro-skills

Combining a simple interface with powerful features is a challenge we are willing to accept. The skills and experience we have acquired give us a solid basis to design and develop for smart objects. At Applidium, we are transposing our deep knowledge of server/smartphone communication to create enhanced smartphone/object interactions, thinking of the user experience as a continuum.
Challenge accepted.

Conclusion

Despite the hype overload, we, at Applidium, wondered what the role of mobile apps would be in a smart world filled with connected objects. The interactions between the mobile and these objects are crucial, and in the years to come, wearables and other smart objects will flourish only if the transition is well executed. In fact, within a few years almost any kind of object will be smart.
As of today, mobile apps stand at the center of the IoT world, providing a smooth continuity among all these devices. So what would be the next step?

With the multiplication of mobiles and sensors-filled objects, coupled with the expansion of the Cloud computing and Big Data Processing, it won’t be long before the IoT becomes the IoE (Internet of Everything).

Core Data Features in iOS 8 (Part 1)


Last June, Apple introduced iOS 8, and every new version of iOS comes with a bunch of new APIs. Browsing the full list of API differences between iOS 7 and iOS 8, we came across two new Core Data features: batch update requests and asynchronous fetch requests. In this article we will focus on the first one.

Batch update requests

The first interesting feature in the new Core Data APIs is the introduction of batch update requests. Before iOS 8, there was no built-in way to update a whole set of Core Data records to the same value: you had to do it manually and, as we will see a little further on, this could be time consuming. Core Data now provides a generic and efficient way to perform such an operation.

Apple introduced a new subclass of NSPersistentStoreRequest called NSBatchUpdateRequest. This class is used to perform a batch update in a persistent store, without loading any data into memory.

If we look in detail at the API, we notice that NSBatchUpdateRequest instances have a property called propertiesToUpdate. This property is used to specify the attributes of the entity to update, along with their final values.

There is also a resultType property that is used to set the type of result that should be returned from the request. It can take three different values:

  • NSStatusOnlyResultType (by default): the request does not return anything
  • NSUpdatedObjectIDsResultType: the request returns the object IDs (NSArray of NSManagedObjectID) of the rows that were updated
  • NSUpdatedObjectsCountResultType: the request returns the number of rows that were updated
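Though not used in the examples below, a batch update can also be scoped with a predicate. A quick sketch (the publicationDate attribute and the cutoffDate variable are hypothetical):

// Mark as read only the articles older than a given date.
NSBatchUpdateRequest *request = [NSBatchUpdateRequest batchUpdateRequestWithEntityName:@"Article"];
request.predicate = [NSPredicate predicateWithFormat:@"publicationDate < %@", cutoffDate]; // hypothetical attribute
request.propertiesToUpdate = @{@"read": @YES};
request.resultType = NSUpdatedObjectIDsResultType; // get back the IDs of the updated rows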

In practice: iOS 7 VS iOS 8

Let’s take an example to see the difference. Imagine we have a set of article objects, and each article has a read property that indicates whether the current user has read it.

Let’s assume we want to implement a feature that marks all articles as read.

In iOS 7, we would have written something like this:

- (void)markAllAsRead {
    // Create a fetch request for the Article entity
    NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:@"Article"];
    // Execute the fetch request and get the array of articles
    NSArray *articles = [self.managedObjectContext executeFetchRequest:fetchRequest error:NULL];
    // Update the managed objects one by one
    for (Article *article in articles) {
        article.read = @YES;
    }
    // Save the context to persist changes
    NSError *error = NULL;
    [self.managedObjectContext save:&error];
    if (error) {
        // Handle the error
    }
}

Very straightforward. We fetch all the articles, iterate through all of them and set the read property to @YES. Finally we save the context to persist the data.

In iOS 8, though, we can now use an NSBatchUpdateRequest to perform the same action.

- (void)markAllAsRead {
    // Create the batch update request for the Article entity
    NSBatchUpdateRequest *batchUpdateRequest = [NSBatchUpdateRequest batchUpdateRequestWithEntityName:@"Article"];
    // Specify the properties to update and their final values
    batchUpdateRequest.propertiesToUpdate = @{@"read": @YES};
    // Set up the result type of the operation
    batchUpdateRequest.resultType = NSUpdatedObjectsCountResultType;
    // Execute the request
    NSError *error = NULL;
    NSBatchUpdateResult *result = (NSBatchUpdateResult *)[self.managedObjectContext executeRequest:batchUpdateRequest error:&error];
    if (!result) {
        // Handle the error
    } else {
        // result is of type NSUpdatedObjectsCountResultType
        NSAssert(result.resultType == NSUpdatedObjectsCountResultType, @"Wrong result type");
        // Handle the result
        NSLog(@"Updated %d objects.", [result.result intValue]);
    }
}

Let’s look closer at what is happening here.

  1. First we create the NSBatchUpdateRequest with the entity we want to update.
  2. We specify the properties to update via the propertiesToUpdate property. In our case we want to set the read property to @YES for all the articles.
  3. We set the result type of the request to NSUpdatedObjectsCountResultType
  4. Finally, we pass the request to the store with the method [NSManagedObjectContext executeRequest:error:], which returns an NSPersistentStoreResult. This result is cast to NSBatchUpdateResult to access the number of rows that were updated.

Performance

The two examples above perform the same action, but there is a big difference between them in terms of performance. Let’s focus on what happens under the hood in both cases.

The iOS 7 example starts by loading all the articles into memory with an NSFetchRequest. In practice, that means an SQL SELECT query is sent to the store to retrieve all the articles. Core Data then loads the articles into memory, creating N NSManagedObject instances, where N is the number of articles fetched. This step can be time consuming for large data sets.
Then all the managed objects are updated to set the read property to YES. Changes are persisted to the database when we save the context: N SQL UPDATE queries are generated, one for each article.
In total, N + 1 SQL queries are sent to the persistent store.

However, the iOS 8 example does not use an NSFetchRequest. There is no SELECT query and no memory overhead. Instead, performing an NSBatchUpdateRequest sends a single SQL UPDATE query to the store, which takes care of updating all the records.
In total, only one query is sent to the persistent store.

Using a batch update request avoids sending N queries to the store and creating N managed objects in memory. To get an idea of the overhead this represents, setting up a simple benchmark shows that updating 1,000 articles is about 10 times faster with a batch request than with a classic fetch-then-update approach.

Conclusion

Using NSBatchUpdateRequest is a great improvement in terms of performance because it will not load any content in the current context and will only deliver the request to the store. That means your code relies directly on the database to update the records, and not on the NSManagedObject objects anymore.

Note. As batch update requests do not create managed objects, there are no validations performed when you update your records. It’s up to you to verify in advance that the values are correct when passed to the store.

Hacking the Navigo


If you ever use public transportation in Paris, you might have noticed the appearance of a slightly different Navigo pass since mid-January. This new pass sports a brand-new Philippe Starck design and, first and foremost, has a built-in NFC chip that is compatible with most Android phones (on the iPhone 6, the NFC API is currently reserved exclusively for Apple Pay).

This mobile-friendly feature inevitably raised our interest, so we decided to take a closer look. How do these cards work? What information can we find on them?
We realised that the Navigo pass is a smartcard that follows the Calypso electronic ticketing system, and more particularly the INTERCODE standard. Therefore, following those norms, we have read access to some files within the card.
Among those files, several pieces of information are available. It is possible to access a validation log containing data on the last three “punches” of the Navigo pass. Besides, another file stores details on the owner’s subscriptions (duration, areas of validity, …). Nonetheless, no personal information about the owner of the pass can be read.

Following these findings, we built a demo app for Android called Ticket Puncher. It lets the user see the content of any Navigo card placed near the device: just hold a Navigo pass against your device for it to be detected and analyzed.
The app then displays information on the last three validations: date, time, type of transport (subway, tramway, bus, …), line and station (the latter two only for the subway and RER).
Interestingly, none of this readable data is encrypted; we simply use a lookup table to resolve the subway station names.

We believe that the compatibility of these cards with smartphones is a huge step forward. It paves the way for a wide range of applications that were previously technically impossible. For example, we could recharge the Navigo card from our mobile rather than queuing at the counter at the beginning of each month. Even better, we could link the user’s pass to a mobile travel companion in order to personalize the app’s features: favourite stations, customized traffic alerts, network consumption data…
Last but not least, thanks to the introduction of host-based card emulation in Android 4.4, we can even imagine a Navigo app replacing the card altogether, letting you use the public transportation network with your smartphone.
We don’t doubt that more original use cases can be imagined ;-)
Do not hesitate to ask us for a demo or to give us your impressions on Twitter @Applidium.
