Lens Multisearch

Google • 2021-2023 • Interaction Design Lead

First announced at Search On '21, I led the design of multisearch (image + text multimodal search), starting as an experimental feature. Before then, Google Search was unimodal: you could ask a question with text or with an image separately, but combining them was not possible. Unlocking this capability could satisfy many underlying user needs — such as when you see a washstand pipe you can't describe but want to fix, or when you see a dress in a style you like and wonder whether it comes in a different color, with different features, or from other brands.

This feature was showcased at various Google events, including Search On and Google I/O.

Multisearch Searchbox

After the successful launch of the initial multisearch, the team decided to move the multimodal experience into the core of the Searchbox. This was a challenging task that involved framework convergence as well as redefining the spatial model. We doubled down on the image being your search query by literally "moving" your image into the searchbox when you swipe up to see the results page. This significantly boosted user perception of the feature and became the signature interaction for multisearch.

Multisearch capabilities are now coupled with an LLM (Search Generative Experience) and can help in many more use cases. We also brought the searchbox to the upper funnel (the searchbox's expanded state) and allowed users to ask a question before they go to the results page. This is especially helpful when the user already has a question in mind.

Featured links:

