I’m in San Francisco for the JavaOne conference.
If you want to meet for a coffee and discuss anything Android, just let me know (ivan@mindtherobot.com).
In my previous article I outlined the stages you need to go through to manually decode WAVs to PCM and play them on an AudioTrack. I promised to show how to do the same for MP3s, and that is what this post is about.
Again, the use case is more common than you might think. The only way to play an MP3 file through the direct Android API is MediaPlayer, which is heavyweight, slow and exposes only a high-level API. If you need to mix or modify audio streams, or manage them with low latency, you are on your own. But I will try to help you right now.
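To give a taste of the kind of processing MediaPlayer simply cannot offer, here is a minimal sketch of my own (PcmMixer is an illustrative name, not an Android class) that mixes two 16-bit PCM buffers sample by sample; once you hold the raw audio, this sort of operation becomes straightforward:

// Illustrative helper: mixes two 16-bit PCM buffers by summing samples and
// clamping to the valid range, so the result can be written to an AudioTrack.
public class PcmMixer {
    public static short[] mix(short[] a, short[] b) {
        int length = Math.min(a.length, b.length);
        short[] out = new short[length];
        for (int i = 0; i < length; i++) {
            int sum = a[i] + b[i];
            // Clamp to the signed 16-bit range to avoid wrap-around distortion.
            if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE;
            if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
            out[i] = (short) sum;
        }
        return out;
    }
}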
If you’ve managed to hack around the various issues that AudioTrack has, then you are probably enjoying its benefits, such as low latency (in STATIC mode), the ability to generate audio on the fly (in STREAM mode) and the wonderful ability to access and modify raw sound data before you play it.
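As a quick reminder of what that looks like in code, here is a minimal sketch of a streaming AudioTrack, assuming 44.1 kHz mono 16-bit PCM and a pcmData buffer obtained from your own decoding code:

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

// Minimal sketch: play raw 16-bit mono PCM at 44.1 kHz through a streaming AudioTrack.
public class PcmPlayer {
    public static void play(short[] pcmData) {
        int sampleRate = 44100;
        int minBufferSize = AudioTrack.getMinBufferSize(
                sampleRate,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT);

        AudioTrack track = new AudioTrack(
                AudioManager.STREAM_MUSIC,
                sampleRate,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT,
                minBufferSize,
                AudioTrack.MODE_STREAM);

        track.play();
        track.write(pcmData, 0, pcmData.length); // blocks while the buffer drains
        track.stop();
        track.release();
    }
}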
However, the problem now is getting that data from a source. Many applications that need AudioTrack do not generate PCM audio from scratch (Ethereal Dialpad and similar apps are examples of ones that do). Most likely you will need to load existing audio samples from sources such as WAV or MP3 files.
Don’t expect to be able to use MediaPlayer facilities to decode raw audio from WAVs and MP3s. Although MediaPlayer can play those files pretty well, its logic is almost entirely contained in the native layer and there is no option for us to plug in and use the decoders for our own purposes. Thus, we have to decode PCM from audio files manually.
In this article I will cover WAV files and in the next one we will get advanced and read audio from MP3s.
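As a small preview of the WAV part, here is a simplified sketch that assumes the canonical 44-byte header of a plain PCM WAV file and reads the samples as little-endian 16-bit values; a robust reader should walk the RIFF chunks instead of relying on this fixed layout:

import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

// Simplified sketch: read 16-bit PCM samples from a plain PCM WAV file by
// skipping the canonical 44-byte header instead of parsing the RIFF chunks.
public class SimpleWavReader {
    public static short[] readPcm(String path) throws IOException {
        DataInputStream in = new DataInputStream(new FileInputStream(path));
        try {
            in.skipBytes(44); // RIFF header + fmt chunk + data chunk header

            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            byte[] chunk = new byte[4096];
            int read;
            while ((read = in.read(chunk)) != -1) {
                buffer.write(chunk, 0, read);
            }

            byte[] bytes = buffer.toByteArray();
            short[] samples = new short[bytes.length / 2];
            for (int i = 0; i < samples.length; i++) {
                // WAV stores samples little-endian: low byte first.
                samples[i] = (short) ((bytes[2 * i] & 0xFF) | (bytes[2 * i + 1] << 8));
            }
            return samples;
        } finally {
            in.close();
        }
    }
}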
(Image: Thomas Edison and one of his early phonographs)
Lately I’ve been digging into the Android audio APIs. Earlier I wrote an introductory article for WiseAndroid that describes the three available APIs. This article assumes you are familiar with AudioTrack, SoundPool and MediaPlayer at a basic level.
What I want to present in this post is my experience with the existing audio APIs on Android, including the issues and problems I personally faced. I will also briefly cover OpenSL ES, the standard that is expected to be supported in one of the upcoming Android releases.
After I posted the original article about making a vintage Android thermometer, it became popular very quickly, and I was genuinely surprised by how useful some people found it. The code I wrote for that article was meant as an example, a draft of what you could really build using the techniques described in the post.
Freddy Martens from ATS Tech Lab took that code, improved it and adapted it to his needs, creating several industrial automation components based on the improved Vintage Thermometer. You can have a look at them here. I also recommend reading the other articles on their blog and keeping an eye on it.
I don’t know about you but I have an emotional attachment to old platforms and APIs. They are often sweet and nice in their primordial simplicity, even with their well-understood limitations. Even when they become obsolete, we developers still remember and miss them.
In the case of Android, version 1.5 / API level 3 is technically obsolete but far from extinct on the device market. In fact, as of the day I write this article, it is one of the most widely installed Android versions. (And if you add the not-so-different 1.6, the two together take almost half of the market.) While subsequent versions brought us a lot of goodies in various areas of the platform API, 80% or more of the functionality our apps need today is already present in 1.5.
Sure, there are cases when your app just isn’t meant to run on 1.x: when its core functionality is based on APIs that are absent from API level 3 (e.g. you’re making a live wallpaper), when you don’t want to support old devices for performance reasons, or when a marketing decision rules them out. In many cases, however, what makes developers raise the minSdkVersion bar is the need to use some of the 1.6+ APIs, such as the new contact management facilities, signal strength detection or better animation support.
The good news I’m trying to bring in this article, to those who aren’t yet familiar with this trick, is that you can actually use the new APIs and still keep minSdkVersion at 3. The trick might not work in every case, and it may look like a hack, but it covers many situations like this and, who knows, might let your app reach its grateful, ready-to-pay customers who still use 1.5.
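For the impatient, here is a minimal sketch of the wrapper-class flavour of the trick, with made-up class names: the project is compiled against a newer SDK, but the code that touches the new API lives in a nested class that is only reached after a runtime version check, so a 1.5 device never has to resolve it:

import android.net.Uri;
import android.os.Build;

// Sketch of the lazy-loading wrapper trick (class names are made up).
// Compiled against a newer SDK, but minSdkVersion stays at 3.
public class ContactsHelper {

    // Build.VERSION.SDK_INT only exists since API level 4, so on a 1.5 device
    // we have to parse the string form instead.
    private static final int SDK = Integer.parseInt(Build.VERSION.SDK);

    public static Uri contactsUri() {
        if (SDK >= 5) {
            // ContactsContract exists only on API 5+, so it is referenced
            // exclusively inside the nested wrapper class below.
            return NewContactsWrapper.contentUri();
        } else {
            return android.provider.Contacts.People.CONTENT_URI; // deprecated, but fine on 1.5
        }
    }

    private static class NewContactsWrapper {
        static Uri contentUri() {
            return android.provider.ContactsContract.Contacts.CONTENT_URI;
        }
    }
}

Dalvik resolves classes lazily, which is what makes this pattern work; the same idea applies to any of the 1.6+ APIs mentioned above.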
Most Android developers start their learning with pure Java, Android SDK based apps. While everyone is aware that there is the NDK (the Native Development Kit), most never face the need to use it. Since the NDK is distributed separately from the SDK, including its documentation and samples, you are unlikely to get familiar with it before you actually try it as a solution to one of your development challenges.
Because of that, many people regard the NDK as the “black magic” of Android development. Developers who are not familiar with it tend to think it is (1) complex to understand and use, while at the same time many see it as (2) a sort of silver bullet that can solve any problem the SDK can’t.
Well, both opinions are rather wrong, as I hope to show further in this post. Although it does carry a maintenance cost and does add technical complexity to your project, the NDK is not difficult to install or use. And while there are cases where the NDK will be really helpful for your app, it offers a rather limited API that is mostly focused on a few performance-critical areas.
In this tutorial we will take the basic Android development environment that we created in one of the previous articles and add NDK support to it. We will also create a basic skeleton project that uses the NDK, which you can use as the foundation for your NDK-powered apps.
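To give you an idea of what the Java side of such a skeleton looks like, here is a minimal sketch (the package, library and method names are placeholders); the matching C implementation would live under the project's jni/ directory and be built by ndk-build:

package com.example.ndkskeleton;

// Minimal sketch of the Java side of an NDK skeleton (placeholder names).
// The matching C function, compiled into libhello-ndk.so, would be named
// Java_com_example_ndkskeleton_NativeLib_helloFromNative per the JNI convention.
public class NativeLib {

    static {
        // Loads libhello-ndk.so from the APK's native library directory.
        System.loadLibrary("hello-ndk");
    }

    // Implemented in C/C++ under the project's jni/ directory.
    public static native String helloFromNative();
}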