Is it possible to create UI elements with the NDK? - lack of specs in Android docs


Sure you can. Read up on calling Java from C(++), and call the respective Java functions - either construct UI elements (layouts, buttons, etc.) one by one, or load an XML layout. There's no C-specific interface for that, but the Java one is there to call.

Unless it's a game and you intend to do your own drawing via OpenGL ES. I'm not sure if you can mix and match.

In a NativeActivity, you can still get a pointer to the Java Activity object and call its methods - it's the clazz member of the ANativeActivity structure, which your android_main can reach through the android_app structure it receives as a parameter. Take that pointer, get a JNIEnv* from the JavaVM* in the same structure, and assign a layout.
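A minimal sketch of that call path, under the assumption that you've defined a no-arg method on your own Activity subclass (the name setMyLayout below is hypothetical; vm and clazz come from the NDK's ANativeActivity):

```cpp
#include <jni.h>
#include <android_native_app_glue.h>

// Sketch: call a Java method on the Activity from native code.
// `app` is the android_app* handed to android_main.
static void CallActivityMethod(struct android_app* app) {
    JavaVM* vm = app->activity->vm;
    JNIEnv* env = NULL;
    // android_main runs on its own thread, so attach it before any JNI calls.
    vm->AttachCurrentThread(&env, NULL);

    jobject activity = app->activity->clazz;   // the Java Activity object
    jclass cls = env->GetObjectClass(activity);
    // "setMyLayout" is hypothetical - a void, no-arg method on your Activity.
    jmethodID mid = env->GetMethodID(cls, "setMyLayout", "()V");
    if (mid != NULL)
        env->CallVoidMethod(activity, mid);

    env->DeleteLocalRef(cls);
    vm->DetachCurrentThread();
}
```

Keep in mind that anything touching the view hierarchy must ultimately run on the Java UI thread, e.g. by having your Java method route through Activity.runOnUiThread.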

How this will interoperate with OpenGL drawing, I'm not sure.

EDIT: about putting together your own input processing. The key callback is onInputEvent(struct android_app* app, AInputEvent* event) within the android_app structure. Place your callback there; Android will call it whenever appropriate. Use AInputEvent_getType(event) to retrieve the event type; touch events have type AINPUT_EVENT_TYPE_MOTION.

EDIT2: here's a minimum native app that grabs the touch events:

```cpp
#include <jni.h>
#include <android_native_app_glue.h>
#include <android/log.h>

static int32_t OnInput(struct android_app* app, AInputEvent* event)
{
    __android_log_write(ANDROID_LOG_ERROR, "MyNativeProject", "Hello input event!");
    return 0;
}

extern "C" void android_main(struct android_app* App)
{
    app_dummy();
    App->onInputEvent = OnInput;
    for (;;)
    {
        struct android_poll_source* source;
        int ident;
        int events;
        while ((ident = ALooper_pollAll(-1, NULL, &events, (void**)&source)) >= 0)
        {
            if (source != NULL)
                source->process(App, source);
            if (App->destroyRequested != 0)
                return;
        }
    }
}
```

You need, naturally, to add a project around it, with a manifest, Android.mk and everything. Android.mk will need the following as the last line:

$(call import-module,android/native_app_glue)

The native_app_glue is a static library that provides some C bridging for the APIs that are normally consumed via Java.

You can do it without the glue library as well. But then you'll need to provide your own ANativeActivity_onCreate function, and a bunch of other callbacks. The android_main/android_app combo is an interface defined by the glue library.
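Without the glue, the bare entry point looks roughly like this - a sketch with the callback bodies omitted; the entry point and callback names come from the NDK's <android/native_activity.h>:

```cpp
#include <android/native_activity.h>

// Android invokes these callbacks on the process's main thread.
static void OnStart(ANativeActivity* activity) {
    // ...react to the activity starting
}

static void OnInputQueueCreated(ANativeActivity* activity, AInputQueue* queue) {
    // Without the glue library, you attach this queue to an ALooper
    // and drain AInputEvents from it yourself.
}

// The entry point Android calls when the NativeActivity is created;
// this is what native_app_glue normally implements for you.
extern "C" void ANativeActivity_onCreate(ANativeActivity* activity,
                                         void* savedState, size_t savedStateSize) {
    activity->callbacks->onStart = OnStart;
    activity->callbacks->onInputQueueCreated = OnInputQueueCreated;
    // ...set the other callbacks you care about (onResume, onDestroy, etc.)
}
```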

EDIT: For touch coordinates, use AMotionEvent_getX/Y(), passing the event object as the first parameter and the index of the pointer as the second. Use AMotionEvent_getPointerCount() to retrieve the number of pointers (touch points). That's your native processing of multitouch events.
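Putting those calls together, the input callback from the sample above could read every touch point like this (a sketch; the log tag is carried over from the earlier example):

```cpp
#include <android/input.h>
#include <android/log.h>
#include <android_native_app_glue.h>

static int32_t OnInput(struct android_app* app, AInputEvent* event)
{
    if (AInputEvent_getType(event) == AINPUT_EVENT_TYPE_MOTION)
    {
        // One motion event can carry several pointers (fingers).
        size_t count = AMotionEvent_getPointerCount(event);
        for (size_t i = 0; i < count; ++i)
        {
            float x = AMotionEvent_getX(event, i);
            float y = AMotionEvent_getY(event, i);
            __android_log_print(ANDROID_LOG_INFO, "MyNativeProject",
                                "pointer %zu at (%f, %f)", i, x, y);
        }
        return 1;  // nonzero: we handled the event
    }
    return 0;
}
```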

I'm supposed to detect the [x,y] position every time, compare it to the location of my joystick, store the previous position, and compare the previous position to the next one to get the direction?

In short, yes, you are. There's no builtin platform support for virtual joysticks; you deal with touches and coordinates, and you translate that into your app's UI metaphor. That's pretty much the essence of programming.

Not "every time", though - only when it's changing. Android is an event-driven system.
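The comparison itself is plain arithmetic and needs nothing from the OS. A sketch of one way to do it (joystick_direction is a hypothetical helper, not an Android API): take the touch position relative to the joystick's center and normalize it into a direction vector.

```cpp
#include <cmath>

// Hypothetical helper, not an Android API: convert a touch position relative
// to the virtual joystick's center into a normalized direction vector.
// Returns 1 and writes a unit vector when the touch is outside the dead zone,
// 0 (no direction) otherwise.
static int joystick_direction(float center_x, float center_y,
                              float touch_x, float touch_y,
                              float dead_zone,
                              float* out_dx, float* out_dy)
{
    float dx = touch_x - center_x;
    float dy = touch_y - center_y;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len < dead_zone)           // inside the dead zone: ignore the touch
        return 0;
    *out_dx = dx / len;            // unit vector from center toward the touch
    *out_dy = dy / len;
    return 1;
}
```

For example, a touch at (160, 100) against a joystick centered at (100, 100) yields the direction (1.0, 0.0). You would call something like this from onInputEvent, and only when the reported coordinates actually change.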

Now, about your "I want it on the OS level" sentiment. It's WRONG on many levels. First, the OS does not owe you anything. The OS is what it is; take it or leave it. Second, unwillingness to extend an effort (AKA being lazy) is generally frowned upon in the software community. Third, the OS code is still code. Moving something into the OS might gain you some efficiency, but why do you think it will make a user-perceptible difference? It's touch processing we're talking about - not a particularly CPU-intensive task. Did you actually build an app, profile it, and find its performance lacking? Until you do, don't ever guess where the bottleneck will be. The word for that is "premature optimization", and it's something that everyone and their uncle's cat would warn you against.


Rawdrawandroid shows that nothing is impossible. Using it, you can write the UI and handle UI events entirely in C/C++.

Note: no Java, classes.dex, or bytecode is used.

Sample App: https://play.google.com/store/apps/details?id=org.cnlohr.colorchord

Please read how the rawdraw developer addresses common situations:

With this framework you get:

  • A window with OpenGL ES support
  • Accelerometer/gyro input, multi-touch
  • An Android keyboard for key input
  • The ability to store asset files in your APK and read them with AAssetManager
  • Permissions support for using things like sound; example in https://github.com/cnlohr/cnfa
  • Direct access to USB devices; example in https://github.com/cnlohr/androidusbtest

So an NDK UI can be written in something other than Java.

A little bit of this also has to do to stick it to all those Luddites on the internet who post "that's impossible" or "you're doing it wrong" to Stack Overflow questions... Requesting permissions in the JNI "oh you have to do that in Java" or other dumb stuff like that. I am completely uninterested in your opinions of what is or is not possible. This is computer science. There aren't restrictions. I can do anything I want. It's just bits. You don't own me.

You can access the Android API from C, just like Java can.

P.S. If you want a bunch of examples of how to do a ton of things in C on Android that you supposedly "need" Java for, scroll to the bottom of this file: https://github.com/cntools/rawdraw/blob/master/CNFGEGLDriver.c - it shows how to use the JNI to marshal a ton of stuff to/from the Android API without needing to jump back into Java/Kotlin land.