Converting raw PCM data to RIFF WAVE

2022-07-04 00:00:00 binary speech-recognition audio java

I'm trying to convert raw audio data from one format to another for speech recognition.

  • Audio is received from a Discord server in 20ms chunks of 48Khz, 16-bit stereo signed BigEndian PCM
  • I'm using CMU's Sphinx for speech recognition, which takes audio as an InputStream of RIFF (little-endian) WAVE audio, 16-bit, mono, 16,000Hz

The audio data is received in a byte[] of length 3840. That byte[] holds 20ms of audio in format 1 above, which means 1 second of audio is 3840 * 50, or 192,000. So that's 192,000 bytes per second. This makes sense for a 48KHz sample rate: multiply by 2 because one byte is 8 bits and our audio is 16-bit, and by another 2 because it's stereo. So 48,000 * 2 * 2 = 192,000.
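That arithmetic can be sanity-checked directly (all figures are byte counts, not sample counts):

```java
// Check the byte-rate arithmetic for 48KHz, 16-bit, stereo, signed PCM.
public class ByteRate {
    public static void main(String[] args) {
        int sampleRate = 48_000;   // samples per second, per channel
        int bytesPerSample = 2;    // 16-bit audio = 2 bytes per sample
        int channels = 2;          // stereo
        int bytesPerSecond = sampleRate * bytesPerSample * channels;
        System.out.println(bytesPerSecond);        // 192000 bytes per second
        System.out.println(bytesPerSecond / 50);   // 3840 bytes per 20ms packet
        System.out.println(bytesPerSecond * 3);    // 576000 bytes in 3 seconds
    }
}
```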

So every time an audio packet is received, I first call this method:

private void addToPacket(byte[] toAdd) {
    if(packet.length >= 576000 && !done) {
        System.out.println("Processing needs to occur...");
        getResult(convertAudio());
        packet = null; // reset the packet
        return;
    }

    byte[] newPacket = new byte[packet.length + 3840];
    // copy old packet onto new temp array
    System.arraycopy(packet, 0, newPacket, 0, packet.length);
    // copy toAdd packet onto new temp array
    System.arraycopy(toAdd, 0, newPacket, 3840, toAdd.length);
    // overwrite the old packet with the newly resized packet
    packet = newPacket;
}
This just appends each new packet onto one big byte[] until that byte[] holds 3 seconds of audio data (576,000 bytes, or 192,000 * 3). Three seconds of audio data should be enough (just a guess) to detect whether the user said the bot's activation hotword, like "Hey computer." Here's how I convert the sound data:

    private byte[] convertAudio() {
        // STEP 1 - DROP EVERY OTHER PACKET TO REMOVE STEREO FROM THE AUDIO
        byte[] mono = new byte[96000];
        for(int i = 0, j = 0; i % 2 == 0 && i < packet.length; i++, j++) {
            mono[j] = packet[i];
        }

        // STEP 2 - DROP EVERY 3RD PACKET TO CONVERT TO 16K HZ Audio
        byte[] resampled = new byte[32000];
        for(int i = 0, j = 0; i % 3 == 0 && i < mono.length; i++, j++) {
            resampled[j] = mono[i];
        }

        // STEP 3 - CONVERT TO LITTLE ENDIAN
        ByteBuffer buffer = ByteBuffer.allocate(resampled.length);
        buffer.order(ByteOrder.BIG_ENDIAN);
        for(byte b : resampled) {
            buffer.put(b);
        }
        buffer.order(ByteOrder.LITTLE_ENDIAN);
        buffer.rewind();
        for(int i = 0; i < resampled.length; i++) {
            resampled[i] = buffer.get(i);
        }

        return resampled;
    }
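For reference, the intent of the three steps above, expressed per-sample rather than per-byte, would look something like the hypothetical sketch below. Since each 16-bit sample spans two bytes, any channel-dropping or rate-dropping has to operate on whole two-byte samples; the sketch still uses naive decimation (keeping one frame in three with no low-pass filter), so a real resampler would sound better:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hypothetical sketch: convert 48KHz, 16-bit, stereo, big-endian PCM to
// 16KHz, 16-bit, mono, little-endian PCM by working on whole 16-bit samples.
public class PcmConvert {
    public static byte[] convert(byte[] stereo48k) {
        ByteBuffer in = ByteBuffer.wrap(stereo48k).order(ByteOrder.BIG_ENDIAN);
        int frames = stereo48k.length / 4; // 2 channels * 2 bytes per sample
        ByteBuffer out = ByteBuffer.allocate((frames / 3) * 2)
                                   .order(ByteOrder.LITTLE_ENDIAN);
        for (int f = 0; f < frames; f++) {
            short left  = in.getShort();
            short right = in.getShort();
            if (f % 3 == 0 && out.remaining() >= 2) {
                // Average the channels to mono, keep 1 frame in 3 (48K -> 16K)
                out.putShort((short) ((left + right) / 2));
            }
        }
        return out.array();
    }
}
```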

Finally, I try to recognize the speech:

private void getResult(byte[] toProcess) {
    InputStream stream = new ByteArrayInputStream(toProcess);
    recognizer.startRecognition(stream);
    SpeechResult result;
    while ((result = recognizer.getResult()) != null) {
        System.out.format("Hypothesis: %s\n", result.getHypothesis());
    }
    recognizer.stopRecognition();
}

The problem I'm having is that CMUSphinx doesn't crash or give any error messages; it just comes up with an empty hypothesis every 3 seconds. I'm not sure why, but my guess is that I'm not converting the audio correctly. Any ideas? Any help would be greatly appreciated.


Solution

So it turns out there's actually a much better built-in solution for converting the audio from a byte[].

Here's what I found that works great:

// Specify the output format you want
AudioFormat target = new AudioFormat(16000f, 16, 1, true, false);
// Get the audio stream ready, and pass in the raw byte[]
AudioInputStream is = AudioSystem.getAudioInputStream(target,
        new AudioInputStream(new ByteArrayInputStream(raw), AudioReceiveHandler.OUTPUT_FORMAT, raw.length));
// Write a temporary file to the computer somewhere; the file can then be
// opened as an InputStream and used for recognition
try {
    AudioSystem.write(is, AudioFileFormat.Type.WAVE, new File("C:\\filename.wav"));
} catch (Exception e) {
    e.printStackTrace();
}
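If you'd rather skip the temporary file, the same conversion can be done fully in memory. The sketch below is a hypothetical variant of the code above, with Discord's input format written out explicitly instead of using JDA's AudioReceiveHandler.OUTPUT_FORMAT constant; it returns complete RIFF/WAVE bytes that can be wrapped in a ByteArrayInputStream for the recognizer:

```java
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Hypothetical in-memory variant of the file-based solution above.
public class InMemoryWave {
    public static byte[] toWave(byte[] raw) {
        // Source: Discord's 48KHz, 16-bit, stereo, signed, big-endian PCM
        AudioFormat source = new AudioFormat(48000f, 16, 2, true, true);
        // Target: Sphinx's 16KHz, 16-bit, mono, signed, little-endian PCM
        AudioFormat target = new AudioFormat(16000f, 16, 1, true, false);

        AudioInputStream in = new AudioInputStream(
                new ByteArrayInputStream(raw), source,
                raw.length / source.getFrameSize()); // length is in frames, not bytes
        try {
            AudioInputStream converted = AudioSystem.getAudioInputStream(target, in);
            // Buffer the converted PCM so the WAVE header can carry an exact
            // length even though we write to a stream, not a seekable file.
            byte[] pcm = converted.readAllBytes();
            AudioInputStream sized = new AudioInputStream(
                    new ByteArrayInputStream(pcm), target,
                    pcm.length / target.getFrameSize());
            ByteArrayOutputStream wav = new ByteArrayOutputStream();
            AudioSystem.write(sized, AudioFileFormat.Type.WAVE, wav);
            return wav.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The result can then go straight to recognizer.startRecognition(new ByteArrayInputStream(wavBytes)) with no file I/O at all.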
