Thanks for posting about this! I’ve been curious about LIG-Aikuma for quite some time now but have never used it. I think one reason I never tried it is that I have never worked with a community which has widespread access to smartphones (and mobile internet).
Another concern, though, is that it isn’t designed to produce audio recordings which meet modern archival standards, and this is actually an issue for audio-visual data collection with mobile devices more generally. You can of course add an external microphone such as the Rode Smartlav+, but even with these mics the results vary depending on the device you are using. If each person creating recordings needs special equipment, that undermines the crowd-sourcing potential of a platform like LIG-Aikuma, and this doesn’t even account for capturing video (which many argue is crucial for documenting gesture and other aspects of the visual mode).
I imagine it is a good platform for community-oriented initiatives such as language maintenance, though, where the focus is on promoting community engagement and the creation of resources which meet the needs of the community.
- If you have funds for small laptops, your collaborators can use ELAN to transcribe/translate (this is what @Andrew_Harvey and I have done with our most recent project).
- If you are restricted to collaborators using smartphones, it is possible for them to transcribe/translate using a basic spreadsheet app. The challenge here is how to link the transcriptions/translations with audio segments. In the past I have done this by segmenting the audio myself and exporting individual audio files for each segment, which can then be played on loop in an audio player app such as VLC while transcribing in the spreadsheet app.
- Perhaps you could try using LIG-Aikuma, but only with recordings which were produced separately with audio recorders.
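For anyone curious what the segment-export step looks like in practice, here is a minimal sketch using only Python's standard-library `wave` module. The function name, file-naming scheme, and the idea of feeding it (start, end, label) tuples copied from a spreadsheet are all my own invention, and it assumes uncompressed WAV input (convert first if your recorder produces something else):

```python
import wave

def export_segments(src_path, segments, out_prefix="seg"):
    """Split one WAV file into per-segment WAV files.

    segments: list of (start_sec, end_sec, label) tuples, e.g. segment
    times copied from a spreadsheet or an ELAN tier export.
    Returns the list of file paths written.
    """
    out_paths = []
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        rate = src.getframerate()
        for i, (start, end, label) in enumerate(segments, 1):
            # Seek to the segment start and read just that span of frames.
            src.setpos(int(start * rate))
            frames = src.readframes(int((end - start) * rate))
            out_path = f"{out_prefix}_{i:03d}_{label}.wav"
            with wave.open(out_path, "wb") as dst:
                dst.setparams(params)  # same rate/channels/sample width
                dst.writeframes(frames)
            out_paths.append(out_path)
    return out_paths
```

Each exported file can then be looped in VLC (or any player) while the matching spreadsheet row is filled in with a transcription and translation.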