1. Overview
In this article, we’ll see how to capture audio from a microphone in Java and save it to a WAV file. To capture the incoming sound, we use the Java Sound API, part of the Java ecosystem.
The Java Sound API is a powerful API to capture, process, and play back audio, and it consists of four packages. We’ll focus on the javax.sound.sampled package, which provides all the interfaces and classes needed to capture incoming audio.
2. What Is the TargetDataLine?
The TargetDataLine is a type of DataLine object that we use to capture and read audio-related data from audio capture devices such as microphones. The interface provides all the methods necessary for reading and capturing data, and it reads the data from the target data line’s buffer.
We can invoke the AudioSystem’s getLine() method and provide it with a DataLine.Info object to obtain a line that provides all the transport-control methods for audio. The Oracle documentation explains in detail how the Java Sound API works.
Let’s go through the steps we need to capture audio from a microphone in Java.
3. Steps to Capture Sound
To save captured audio, Java supports the AU, AIFF, AIFC, SND, and WAVE file formats. We’ll be using the WAVE (.wav) file format to save our files.
The first step in the process is to initialize the AudioFormat instance. The AudioFormat notifies Java how to interpret and handle the bits of information in the incoming sound stream. We use the following AudioFormat class constructor in our example:
AudioFormat(AudioFormat.Encoding encoding, float sampleRate, int sampleSizeInBits, int channels, int frameSize, float frameRate, boolean bigEndian)
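As a minimal illustration of this constructor, here is a sketch of a 16-bit, 44.1 kHz stereo PCM format (the concrete values here are chosen for the example; the application in this article reads them from a constants class instead):

```java
import javax.sound.sampled.AudioFormat;

public class FormatDemo {
    public static void main(String[] args) {
        // 16-bit signed PCM, 44.1 kHz, stereo.
        // Frame size = (sampleSizeInBits / 8) * channels = (16 / 8) * 2 = 4 bytes.
        AudioFormat format = new AudioFormat(
                AudioFormat.Encoding.PCM_SIGNED,
                44100.0f,  // sample rate
                16,        // sample size in bits
                2,         // channels
                4,         // frame size in bytes
                44100.0f,  // frame rate
                false);    // little-endian

        System.out.println(format.getFrameSize()); // prints 4
    }
}
```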
After that, we create a DataLine.Info object. This object holds all the information related to the (input) data line. Using the DataLine.Info object, we can create an instance of the TargetDataLine, which will read all the incoming data into an audio stream. To generate the TargetDataLine instance, we use the AudioSystem.getLine() method and pass the DataLine.Info object:
line = (TargetDataLine) AudioSystem.getLine(info);
The line is a TargetDataLine instance, and the info is the DataLine.Info instance.
Once created, we can open the line to read all the incoming sound, using an AudioInputStream to read the incoming data. Finally, we write this data to a WAV file and close all the streams.
To understand this process, we’ll look at a small program to record input sound.
4. Example Application
To see the Java Sound API in action, let’s create a simple program. We’ll break it down into three sections: first building the AudioFormat, second building the TargetDataLine, and lastly saving the data as a file.
4.1. Building the AudioFormat
The AudioFormat class defines what kind of data the TargetDataLine instance can capture. So, the first step is to initialize the AudioFormat class instance even before we open a new data line. The App class is the main class of the application and makes all the calls. We define the properties of the AudioFormat in a constants class called ApplicationProperties. We build the AudioFormat instance by passing all the necessary parameters:
public static AudioFormat buildAudioFormatInstance() {
    ApplicationProperties aConstants = new ApplicationProperties();
    AudioFormat.Encoding encoding = aConstants.ENCODING;
    float rate = aConstants.RATE;
    int channels = aConstants.CHANNELS;
    int sampleSize = aConstants.SAMPLE_SIZE;
    boolean bigEndian = aConstants.BIG_ENDIAN;

    return new AudioFormat(encoding, rate, sampleSize, channels, (sampleSize / 8) * channels, rate, bigEndian);
}
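The article doesn’t show the ApplicationProperties class itself, so here is a hypothetical sketch of what such a constants class might look like; the field names match the usage above, but the concrete values are assumptions for illustration:

```java
import javax.sound.sampled.AudioFormat;

// Hypothetical sketch of the ApplicationProperties constants class
// referenced by buildAudioFormatInstance(); the field names match the
// usage above, but the values are example choices, not the project's.
public class ApplicationProperties {
    public final AudioFormat.Encoding ENCODING = AudioFormat.Encoding.PCM_SIGNED;
    public final float RATE = 44100.0f;   // samples per second
    public final int CHANNELS = 2;        // stereo
    public final int SAMPLE_SIZE = 16;    // bits per sample
    public final boolean BIG_ENDIAN = true;
}
```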
Now that we have our AudioFormat ready, we can move ahead and build the TargetDataLine instance.
4.2. Building the TargetDataLine
We use the TargetDataLine class to read audio data from our microphone. In our example, we get and run the TargetDataLine in the SoundRecorder class. The getTargetDataLineForRecord() method builds the TargetDataLine instance.
We read and process the audio input, dumping it into an AudioInputStream object. The way we create a TargetDataLine instance is:
private TargetDataLine getTargetDataLineForRecord() throws LineUnavailableException {
    TargetDataLine line;
    DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
    if (!AudioSystem.isLineSupported(info)) {
        return null;
    }
    line = (TargetDataLine) AudioSystem.getLine(info);
    line.open(format, line.getBufferSize());
    return line;
}
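The isLineSupported() check above can be exercised without opening any hardware, which makes it easy to probe whether the current machine has a capture device for a given format. A minimal standalone sketch:

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.TargetDataLine;

public class LineCheck {
    public static void main(String[] args) {
        // 44.1 kHz, 16-bit, stereo, signed, big-endian
        AudioFormat format = new AudioFormat(44100.0f, 16, 2, true, true);
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);

        // Prints false on machines with no matching capture device --
        // exactly the case the null check in getTargetDataLineForRecord() guards.
        System.out.println(AudioSystem.isLineSupported(info));
    }
}
```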
4.3. Building and Filling the AudioInputStream
So far in our example, we have created an AudioFormat instance, applied it to the TargetDataLine, and opened the data line to read audio data. We have also created a thread to help autorun the SoundRecorder instance. When the thread runs, we first build a byte output stream and then convert it to an AudioInputStream instance. The parameters we need for building the AudioInputStream instance are:
int frameSizeInBytes = format.getFrameSize();
int bufferLengthInFrames = line.getBufferSize() / 8;
final int bufferLengthInBytes = bufferLengthInFrames * frameSizeInBytes;
Notice in the above code that we divide the buffer size by 8. We do so to make the buffer and the array the same length, so that the recorder can deliver the data to the line as soon as it is read.
Now that we have initialized all the parameters we need, the next step is to build the byte output stream and then convert the generated output stream (the captured sound data) into an AudioInputStream instance:
buildByteOutputStream(out, line, frameSizeInBytes, bufferLengthInBytes);
this.audioInputStream = new AudioInputStream(line);
setAudioInputStream(convertToAudioIStream(out, frameSizeInBytes));
audioInputStream.reset();
Before we set the InputStream, we’ll build the byte OutputStream:
public void buildByteOutputStream(final ByteArrayOutputStream out, final TargetDataLine line, int frameSizeInBytes, final int bufferLengthInBytes) throws IOException {
    final byte[] data = new byte[bufferLengthInBytes];
    int numBytesRead;

    line.start();
    while (thread != null) {
        if ((numBytesRead = line.read(data, 0, bufferLengthInBytes)) == -1) {
            break;
        }
        out.write(data, 0, numBytesRead);
    }
}
We then convert the byte OutputStream to an AudioInputStream as:
public AudioInputStream convertToAudioIStream(final ByteArrayOutputStream out, int frameSizeInBytes) {
    byte[] audioBytes = out.toByteArray();
    ByteArrayInputStream bais = new ByteArrayInputStream(audioBytes);
    AudioInputStream audioStream = new AudioInputStream(bais, format, audioBytes.length / frameSizeInBytes);
    long milliseconds = (long) ((audioInputStream.getFrameLength() * 1000) / format.getFrameRate());
    duration = milliseconds / 1000.0;
    return audioStream;
}
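The wrapping step at the heart of this method works on any byte array, so it can be demonstrated without a microphone. A minimal sketch, using one second of silence in an assumed 16-bit mono format:

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import java.io.ByteArrayInputStream;

public class ConvertDemo {
    public static void main(String[] args) {
        // 44.1 kHz, 16-bit, mono, signed, little-endian: frame size = 2 bytes
        AudioFormat format = new AudioFormat(44100.0f, 16, 1, true, false);

        // One second of silence: 44100 frames * 2 bytes per frame
        byte[] audioBytes = new byte[44100 * 2];

        // Wrap the raw bytes exactly as convertToAudioIStream() does
        AudioInputStream audioStream = new AudioInputStream(
                new ByteArrayInputStream(audioBytes), format,
                audioBytes.length / format.getFrameSize());

        System.out.println(audioStream.getFrameLength()); // prints 44100
    }
}
```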
4.4. Saving the AudioInputStream to a Wav File
We have created and filled the AudioInputStream and stored it as a member variable of the SoundRecorder class. We’ll retrieve this AudioInputStream in the App class by using the SoundRecorder instance’s getter and pass it to the WaveDataUtil class:
wd.saveToFile("/SoundClip", AudioFileFormat.Type.WAVE, soundRecorder.getAudioInputStream());
The WaveDataUtil class has the code to convert the AudioInputStream into a .wav file:
AudioSystem.write(audioInputStream, fileType, myFile);
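The full WaveDataUtil class isn’t shown in the article, so here is a hypothetical sketch of what a saveToFile() method wrapping that call might look like; the method name matches the earlier call, but the exact signature and error handling in the project may differ:

```java
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.File;
import java.io.IOException;

// Hypothetical sketch of WaveDataUtil; only the core write logic is shown.
public class WaveDataUtil {

    // Writes the captured stream to <name>.<extension>, e.g. "/SoundClip.wav".
    // Returns true on success, false if the write fails.
    public boolean saveToFile(String name, AudioFileFormat.Type fileType,
                              AudioInputStream audioInputStream) {
        File myFile = new File(name + "." + fileType.getExtension());
        try {
            AudioSystem.write(audioInputStream, fileType, myFile);
        } catch (IOException e) {
            return false;
        }
        return true;
    }
}
```

Note that AudioSystem.write() needs a stream with a known frame length for the WAVE format, which is why the article converts the captured bytes into an AudioInputStream with an explicit length first.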
5. Conclusion
This article showed a quick example of using the Java Sound API to capture and record audio from a microphone. The entire code for this tutorial is available over on GitHub.