voiceSearch
Signature
voiceSearch({
  container: string|HTMLElement,
  // Optional parameters
  searchAsYouSpeak: boolean,
  language: string,
  additionalQueryParameters: object,
  templates: object,
  cssClasses: object,
});
Import
import { voiceSearch } from 'instantsearch.js/es/widgets';

About this widget

The voiceSearch widget lets users perform a voice-based query.

It uses the Web Speech API, which only Chrome (from version 25) has implemented so far. This means the voiceSearch widget only works on desktop Chrome and Android Chrome. It doesn’t work on iOS Chrome, which uses the iOS WebKit.
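
The connector exposes an isBrowserSupported flag (see below), but you may also want to check for support yourself, for example to hide the voice search container in unsupported browsers. A minimal sketch, assuming a hypothetical #voicesearch container:

const supportsVoiceSearch =
  'SpeechRecognition' in window || 'webkitSpeechRecognition' in window;

if (!supportsVoiceSearch) {
  // Hide the voice search container in browsers without the Web Speech API
  document.querySelector('#voicesearch').hidden = true;
}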

Examples

voiceSearch({
  container: '#voicesearch',
});
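
A more complete setup adds voiceSearch next to other widgets on an instantsearch instance. A minimal sketch, assuming hypothetical credentials and an index named instant_search:

import algoliasearch from 'algoliasearch/lite';
import instantsearch from 'instantsearch.js';
import { searchBox, hits, voiceSearch } from 'instantsearch.js/es/widgets';

// Hypothetical application ID and search-only API key
const searchClient = algoliasearch('YourApplicationID', 'YourSearchOnlyAPIKey');

const search = instantsearch({
  indexName: 'instant_search',
  searchClient,
});

search.addWidgets([
  searchBox({ container: '#searchbox' }),
  voiceSearch({ container: '#voicesearch' }),
  hits({ container: '#hits' }),
]);

search.start();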

Options

Parameter Description
container #
type: string|HTMLElement
Required

The CSS Selector or HTMLElement to insert the widget into.

voiceSearch({
  container: '#voicesearch',
});
searchAsYouSpeak #
type: boolean
default: false
Optional

Whether to trigger the search as you speak. If false, search is triggered only after speech is finished. If true, search is triggered whenever the engine delivers an interim transcript.

voiceSearch({
  // ...
  searchAsYouSpeak: true,
});
language #
type: string
default: all languages
Optional

The language you want your voiceSearch widget to recognize. Note that the default (all languages) can result in false positives. For example, an English word you pronounce might be recognized as a French word, which can cause irrelevant results. Make sure to give a BCP 47 language tag, like en-US or fr-FR. This language is automatically forwarded to the queryLanguages query parameter.

voiceSearch({
  // ...
  language: 'en-US',
});
additionalQueryParameters #
type: function
Optional

A function that receives the current query and returns the list of search parameters you want to enable for voice search.

By default, we set query parameters that are better suited to voice queries. To disable this behavior, override them in the return value of additionalQueryParameters.

voiceSearch({
  // ...
  additionalQueryParameters(query) {
    return {
      ignorePlurals: false,
    };
  },
});
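
The function also receives the current (spoken) query, so you can derive parameters from it. A minimal sketch; making every spoken word optional is an illustrative choice, not the widget's default:

voiceSearch({
  // ...
  additionalQueryParameters(query) {
    return {
      // Treat each spoken word as optional so long transcripts still match
      optionalWords: query,
    };
  },
});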
templates #
type: object
Optional

The templates to use for the widget.

voiceSearch({
  // ...
  templates: {
    // ...
  },
});
cssClasses #
type: object
Optional

The CSS classes you can override:

  • root: the root element of the widget.
  • button: the button element.
  • status: the status element.
voiceSearch({
  // ...
  cssClasses: {
    root: 'MyCustomVoiceSearch',
    button: [
      'MyCustomVoiceSearchButton',
      'MyCustomVoiceSearchButton--subclass',
    ],
    status: [
      'MyCustomVoiceSearchStatus',
      'MyCustomVoiceSearchStatus--subclass',
    ]
  },
});

Templates

You can customize parts of the widget’s UI using the Templates API.

Every template provides an html function you can use as a tagged template. Using html lets you safely provide templates as an HTML string. It works directly in the browser without a build step. See Templating your UI for more information.

The html function is available starting from v4.46.0.

Parameter Description
buttonText #
type: string|function
Optional

The template used for displaying the button.

voiceSearch({
  // ...
  templates: {
    buttonText(
      { isListening, status, errorCode, transcript, isSpeechFinal, isBrowserSupported },
      { html }
    ) {
      return html`<span>${isListening ? 'Stop' : 'Start'}</span>`;
    },
  },
});
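
Since buttonText also accepts a string, a static label works as well:

voiceSearch({
  // ...
  templates: {
    buttonText: 'Search by voice',
  },
});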
status #
type: string|function
Optional

The template used for displaying the status.

voiceSearch({
  // ...
  templates: {
    status(
      { isListening, status, errorCode, transcript, isSpeechFinal, isBrowserSupported },
      { html }
    ) {
      const className = isListening ? 'listening' : 'not-listening';
      return html`
        <div class="${className}">
          <span>${transcript ? transcript : ''}</span>
        </div>
      `;
    },
  },
});

HTML output

<div class="ais-VoiceSearch">
  <button class="ais-VoiceSearch-button" type="button" title="Search by voice">
    ...
  </button>
  <div class="ais-VoiceSearch-status">
    ...
  </div>
</div>

Customize the UI with connectVoiceSearch

If you want to create your own UI for the voiceSearch widget, you can use connectors.

To use connectVoiceSearch, you can import it with the declaration relevant to how you installed InstantSearch.js.

import { connectVoiceSearch } from 'instantsearch.js/es/connectors';

Then it’s a 3-step process:

// 1. Create a render function
const renderVoiceSearch = (renderOptions, isFirstRender) => {
  // Rendering logic
};

// 2. Create the custom widget
const customVoiceSearch = connectVoiceSearch(
  renderVoiceSearch
);

// 3. Instantiate
search.addWidgets([
  customVoiceSearch({
    // instance params
  })
]);

Create a render function

This rendering function is called before the first search (init lifecycle step) and each time results come back from Algolia (render lifecycle step).

const renderVoiceSearch = (renderOptions, isFirstRender) => {
  const {
    isBrowserSupported, // boolean
    isListening, // boolean
    toggleListening, // function
    voiceListeningState, // object
    widgetParams, // object
  } = renderOptions;

  if (isFirstRender) {
    // Do some initial rendering and bind events
  }

  // Render the widget
};

Render options

Parameter Description
isBrowserSupported #
type: boolean

true if the user’s browser supports voice search.

const renderVoiceSearch = (renderOptions, isFirstRender) => {
  const { isBrowserSupported } = renderOptions;

  const container = document.querySelector('#voicesearch');

  if (isFirstRender && !isBrowserSupported) {
    const message = document.createElement('p');
    message.innerText = 'This browser is not supported.';
    container.appendChild(message);
  }
};
toggleListening #
type: function

Starts listening to the user’s speech, or stops it if already listening.

const renderVoiceSearch = (renderOptions, isFirstRender) => {
  const { toggleListening } = renderOptions;

  const container = document.querySelector('#voicesearch');

  if (isFirstRender) {
    const button = document.createElement('button');
    button.textContent = 'Toggle';

    button.addEventListener('click', event => {
      toggleListening();
    })

    container.appendChild(button);
  }
};
isListening #
type: boolean

true if the widget is listening to the user’s speech.

const renderVoiceSearch = (renderOptions, isFirstRender) => {
  const { isListening, toggleListening } = renderOptions;

  const container = document.querySelector('#voicesearch');

  if (isFirstRender) {
    const button = document.createElement('button');
    button.textContent = 'Toggle';

    button.addEventListener('click', event => {
      toggleListening();
    })

    container.appendChild(button);
  }

  container.querySelector('button').textContent =
    isListening ? 'Stop' : 'Start';
};
voiceListeningState #
type: object

An object containing the following states regarding speech recognition:

  • status: string: the current status (initial | askingPermission | waiting | recognizing | finished | error).
  • transcript: string: currently recognized transcript.
  • isSpeechFinal: boolean: true if speech recognition is finished.
  • errorCode: string|undefined: an error code (if any). Refer to the spec for more information.
const renderVoiceSearch = (renderOptions, isFirstRender) => {
  const { voiceListeningState, toggleListening } = renderOptions;
  const {
    status,
    transcript,
    isSpeechFinal,
    errorCode,
  } = voiceListeningState;

  const container = document.querySelector('#voicesearch');

  if (isFirstRender) {
    const state = document.createElement('div');
    state.innerHTML = `
      <p>status : <span class="status"></span></p>
      <p>transcript : <span class="transcript"></span></p>
      <p>isSpeechFinal : <span class="is-speech-final"></span></p>
      <p>errorCode : <span class="error-code"></span></p>
    `;
    container.appendChild(state);

    const button = document.createElement('button');
    button.textContent = 'Toggle';
    button.addEventListener('click', event => {
      toggleListening();
    })
    container.appendChild(button);
  }
  container.querySelector('.status').innerText = status;
  container.querySelector('.transcript').innerText = transcript;
  container.querySelector('.is-speech-final').innerText = isSpeechFinal;
  container.querySelector('.error-code').innerText = errorCode || '';
};
widgetParams #
type: object

All original widget options forwarded to the render function.

const renderVoiceSearch = (renderOptions, isFirstRender) => {
  const { widgetParams } = renderOptions;

  widgetParams.container.innerHTML = '...';
};

const customVoiceSearch = connectVoiceSearch(
  renderVoiceSearch
);

search.addWidgets([
  customVoiceSearch({
    container: document.querySelector('#voicesearch'),
  })
]);

Create and instantiate the custom widget

We first create the custom widget from our render function, then we instantiate it. When doing so, there are two types of parameters you can give:

  • Instance parameters: predefined parameters you can use to configure the behavior of Algolia.
  • Your own parameters: anything else you need to make the custom widget generic.

Both instance and custom parameters are available as widgetParams inside the render function.

const customVoiceSearch = connectVoiceSearch(
  renderVoiceSearch
);

search.addWidgets([
  customVoiceSearch({
    // Optional parameters
    searchAsYouSpeak: boolean,
  })
]);
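
For instance, this sketch passes both kinds of parameters: searchAsYouSpeak (instance) and a custom container (our own, assuming a hypothetical #voicesearch element), then reads the container back through widgetParams:

const renderVoiceSearch = (renderOptions, isFirstRender) => {
  const { widgetParams, isListening } = renderOptions;

  // `container` is our own parameter, forwarded untouched by the connector
  widgetParams.container.textContent = isListening ? 'Listening...' : 'Click to speak';
};

const customVoiceSearch = connectVoiceSearch(renderVoiceSearch);

search.addWidgets([
  customVoiceSearch({
    // Instance parameter
    searchAsYouSpeak: true,
    // Our own parameter
    container: document.querySelector('#voicesearch'),
  }),
]);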

Instance options

Parameter Description
searchAsYouSpeak #
type: boolean
default: false
Optional

Whether to trigger the search as you speak. If false, search is triggered only after speech is finished. If true, search is triggered whenever the engine delivers an interim transcript.

customVoiceSearch({
  searchAsYouSpeak: true,
});
language #
type: string
default: all languages
Optional

The language you want your voiceSearch widget to recognize. Note that the default (all languages) can result in false positives. For example, an English word you pronounce might be recognized as a French word, which can cause irrelevant results. Make sure to give a BCP 47 language tag, like en-US or fr-FR. This language is automatically forwarded to the queryLanguages query parameter.

customVoiceSearch({
  // ...
  language: 'en-US',
});
additionalQueryParameters #
type: function
Optional

A function that receives the current query and returns the list of search parameters you want to enable for voice search.

By default, we set query parameters that are better suited to voice queries. To disable this behavior, override them in the return value of additionalQueryParameters.

customVoiceSearch({
  // ...
  additionalQueryParameters(query) {
    return {
      ignorePlurals: false,
    };
  },
});

Full example

// Create a render function
const renderVoiceSearch = (renderOptions, isFirstRender) => {
  const { isListening, toggleListening, voiceListeningState } = renderOptions;

  const container = document.querySelector('#voicesearch');

  if (isFirstRender) {
    const button = document.createElement('button');
    button.addEventListener('click', event => {
      toggleListening();
    })
    container.appendChild(button);

    const state = document.createElement('pre');
    container.appendChild(state);
  }

  container.querySelector('button').textContent =
    isListening ? 'Stop' : 'Start';

  container.querySelector('pre').textContent =
    JSON.stringify(voiceListeningState, null, 2);
};

// Create the custom widget
const customVoiceSearch = connectVoiceSearch(
  renderVoiceSearch
);

// Instantiate the custom widget
search.addWidgets([
  customVoiceSearch({
    container: document.querySelector('#voicesearch'),
  })
]);