Chains in LangChain are implementations of strategies for interacting with indexes.

They make it easier to combine LLMs with your own data, and they are especially effective when working with a large number of documents.

Stuffing

Stuffing feeds all the relevant data to the LLM in a single prompt. This strategy works well with small pieces of data. However, it breaks down once the documents exceed the model's context length limit.
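The idea can be sketched in a few lines of plain Python. This is not LangChain's implementation: `llm` is a hypothetical callable that takes a prompt string and returns a completion, and the limit check uses character count as a crude stand-in for a real token count.

```python
CONTEXT_LIMIT = 4000  # assumed prompt budget (characters here; tokens in practice)

def stuff_chain(llm, question, documents):
    """Concatenate all documents into a single prompt and call the LLM once."""
    context = "\n\n".join(documents)
    prompt = f"Answer using the context below.\n\n{context}\n\nQuestion: {question}"
    if len(prompt) > CONTEXT_LIMIT:
        # Stuffing has no fallback once the data no longer fits.
        raise ValueError("documents exceed the prompt context length limit")
    return llm(prompt)
```

In classic LangChain, this strategy is typically selected with chain_type="stuff" when loading a chain.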

Map Reduce

This method runs each chunk of data through the LLM for summarisation or initial processing, then combines the results into a single prompt to obtain the final result. Although some information may be lost along the way, it scales well to large amounts of data because the per-chunk calls are independent of one another.
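A minimal sketch of the map and reduce steps, again with a hypothetical prompt-to-completion callable rather than LangChain's actual chain:

```python
def map_reduce_chain(llm, documents):
    """Map: process each chunk independently. Reduce: combine the partial results."""
    # Map step: one LLM call per chunk. These calls do not depend on
    # each other, so a real implementation can run them in parallel.
    partials = [llm(f"Summarise this passage:\n{doc}") for doc in documents]
    # Reduce step: a final call over the combined partial results.
    combined = "\n".join(partials)
    return llm(f"Combine these summaries into a single summary:\n{combined}")
```

For N chunks this makes N map calls plus one reduce call; only the final call sees all the (already compressed) information, which is where detail can be lost.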

Refine

Similar to Map Reduce, this strategy takes a step-by-step approach. The difference is that it runs the initial processing only on the first chunk of data; the output is then passed in alongside the next document to be refined. This method can produce higher-quality output than the parallel Map Reduce process. The downside is that it introduces dependencies between steps, so execution is slower than parallel processing.
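The sequential dependency is clearer in code. As before, this is a sketch with a hypothetical `llm` callable, not LangChain's implementation:

```python
def refine_chain(llm, documents):
    """Process the first chunk, then sequentially fold in each remaining chunk."""
    answer = llm(f"Summarise this passage:\n{documents[0]}")
    for doc in documents[1:]:
        # Each step needs the previous answer, so the calls
        # cannot be parallelised the way Map Reduce's map step can.
        answer = llm(
            f"Existing summary:\n{answer}\n\n"
            f"Refine it with this additional context:\n{doc}"
        )
    return answer
```

Because every intermediate answer is re-read at the next step, less information is discarded than in Map Reduce, at the cost of a strictly serial chain of calls.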

Map-Rerank

This method runs initial processing on each chunk of data, obtaining a response and a confidence score for each. It then ranks the responses by confidence score and returns the one with the highest score. This strategy is only useful when the expected answer is contained in a single document.
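The ranking step amounts to a single max over scored answers. In this sketch, `llm_with_score` is a hypothetical callable that returns an (answer, confidence) pair for one document; real implementations usually parse the score out of the model's text output.

```python
def map_rerank_chain(llm_with_score, question, documents):
    """Query each document separately, then return the highest-confidence answer."""
    scored = [llm_with_score(question, doc) for doc in documents]
    # Rerank step: keep only the answer with the highest confidence score.
    best_answer, _best_score = max(scored, key=lambda pair: pair[1])
    return best_answer
```

Note that answers from all other documents are discarded entirely, which is why this only works when one document contains the full answer.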