
Going deeper into deep learning

May 15, 2016

After more than half a year, we are finally making a release! You can obtain the new version 1.2 at the usual places on GitHub and the Maven Central Repository for JavaCPP, JavaCPP Presets, JavaCV, ProCamCalib, and ProCamTracker. For Scala users, Lloyd Chan has also contributed sbt-javacpp and sbt-javacv, offering them easy-to-use plugins for sbt. Thanks to Vince Baines, this release also contains a few binaries for the linux-armhf platform, which work on most Raspberry Pi devices, among others. We also hope to have continuous integration (CI) set up before long to provide a larger selection of prebuilt binaries on all platforms for non-release versions as well.

With regards to deep learning, we realized last year that JavaCPP offered something especially in demand in that field: an easy way to access native libraries from an efficient platform like Java (see Java meets Caffe, deep learning in perspective). Since then, Samuel has switched jobs and now works for Skymind, integrating JavaCPP, the JavaCPP Presets, and JavaCV into ND4J and Deeplearning4j, as well as pursuing other avenues, such as maintaining bindings not only for Caffe and cuDNN, but also for MXNet and TensorFlow, among others. Many thanks to Adam Gibson and Chris Nicholson for their trust! I am sure we will achieve great things together.

That said, to attain accuracies higher than those of traditional methods, deep learning requires a lot of data, which makes it a good candidate for data processing in big data applications. Naturally, such applications consume large amounts of memory, but Java cannot access arrays larger than what can be indexed with a 32-bit int value, an inherent limitation of the JVM. Given that Hadoop on Java is the de facto standard when it comes to big data applications, it makes sense to offer a solution to that limitation. To support 64-bit indexing, we have thus widened the position, limit, and capacity fields of the Pointer class to long. Moreover, since standard NIO buffers do not support long indexing, we provide a new backend for the indexer package based on sun.misc.Unsafe. Indexing memory with long values represents a fundamental shift in the API, so it might break some existing code, but nothing too dramatic, we hope. On the brighter side, performance on 64-bit architectures is not affected.
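As a minimal sketch of what the long-based API looks like, the snippet below allocates a native float array and addresses it through a FloatIndexer using long indices. The array size and the class name LongIndexingExample are ours for illustration: a run that actually crosses the 32-bit boundary would need several gigabytes of RAM.

```java
import org.bytedeco.javacpp.FloatPointer;
import org.bytedeco.javacpp.indexer.FloatIndexer;

public class LongIndexingExample {
    public static void main(String[] args) {
        // Small for demonstration; could exceed Integer.MAX_VALUE given enough RAM.
        long size = 1L << 20;
        try (FloatPointer pointer = new FloatPointer(size);
             FloatIndexer indexer = FloatIndexer.create(pointer)) {
            indexer.put(size - 1, 42f);             // indices are long, no int cast
            System.out.println(indexer.get(size - 1));
            System.out.println(pointer.capacity()); // capacity is now a long
        }
    }
}
```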

Indexer now also implements AutoCloseable, providing the same benefits as Pointer with try-with-resources constructs. For applications that cannot take advantage of this for memory management purposes, Pointer now also tracks the amount of memory allocated, as reported by the capacity field. This does not work for memory allocated internally by most native libraries (more work would be required to query memory consumption from the operating system), but it does work when allocating arrays of primitive types with allocateArray(). Once memory consumption tracked this way reaches Pointer.maxBytes, the allocator does its best to reclaim memory by calling System.gc(), waiting a bit, and retrying a few times in a loop.
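For instance, a tight allocation loop can stay under the limit by releasing each buffer deterministically, as in this sketch. The loop bounds and buffer sizes are arbitrary, and we assume the org.bytedeco.javacpp.maxbytes system property as the way to set Pointer.maxBytes:

```java
import org.bytedeco.javacpp.BytePointer;

public class MaxBytesExample {
    public static void main(String[] args) {
        // Run with e.g. -Dorg.bytedeco.javacpp.maxbytes=256M to cap the
        // amount of memory tracked via the capacity field of Pointer.
        for (int i = 0; i < 1000; i++) {
            try (BytePointer buffer = new BytePointer(1024 * 1024)) {
                buffer.put(0, (byte) 1); // use the 1 MB native buffer
            } // close() deallocates here, so the tracked total stays low
              // and the allocator rarely needs to fall back on System.gc()
        }
    }
}
```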

That about covers the essential changes in this release, but other things have been fixed and updated too, so we invite you to check the changelogs and to contact us through the mailing list on Google Groups, issues on GitHub, or the chat room on Gitter, for any questions that you may have. Together, let's make the future happen!

Comments

To add a comment, please edit the comments file and send a pull request!