How to Use AI for Research: Elicit.org for writing a literature review (BU CARES, Jan 27, 2023)
The video covers how to use Elicit.org to write a literature review, including considerations around avoiding plagiarism and upholding academic integrity, along with suggestions for ways to filter and find the types of papers you're looking for, and how to see information at a glance. Here is a helpful resource for teachers who are thinking about how to incorporate ChatGPT or AI technology into their classrooms and teaching practice: https://ditchthattextbook.com/ai/#t-1...
Related video: how to get a personal research assistant powered by AI, covering Elicit (https://elicit.org/), ConnectedPapers (https://www.connectedpapers.com/), and Research Rabbit (https://www.researchrabbit.ai/): https://www.youtube.com/watch?v=xeRJiCCpGro
Last updated: April 2022
Elicit is a research assistant using language models like GPT-3 to automate parts of researchers’ workflows. Currently, the main workflow in Elicit is Literature Review. If you ask a question, Elicit will show relevant papers and summaries of key information about those papers in an easy-to-use table.
If you’d like to learn more, please review the resources in this section.
As of early 2022, Elicit’s users are primarily researchers (students and researchers in academia, at independent organizations, or operating independently). They find Elicit most valuable for finding papers to cite and defining research directions.
Some of our most engaged researchers report using Elicit to find initial leads for papers, answer questions, and get perfect scores on exams. One researcher used a combination of Elicit Literature Review, Rephrase, and Summarization tasks to compile a literature review for publication.
Our Twitter page shows more examples of researcher feedback and how people are using Elicit. Our YouTube page showcases different workflows to try.
With the Elicit Literature Review workflow, you can:
The other “Tasks” in Elicit include both Elicit-created and user-created research tasks that don't exist anywhere else. These tasks can help you brainstorm research questions, summarize paragraphs, and rephrase snippets of text.
Elicit is an early-stage product, with updates and improvements every week (as documented on our mailing list). As of April 2022, the Literature Review workflow is implemented as follows:
A more detailed description is given below.
To help you calibrate how much you can rely on Elicit, we’ll share some of the limitations you should be aware of as you use Elicit:
We tried to share enough to keep you from over-relying on Elicit, but this is not a comprehensive list of its possible limitations.
Elicit is built by Ought, a non-profit machine learning research lab with a team of eight people distributed across the Bay Area, Austin, New York, and Oristà. Our team brings experiences from academia, mature tech, and startups. Ought is funded by grants from organizations like Open Philanthropy, Jaan Tallinn, Future of Life Institute, and other individuals identifying with the effective altruism and longtermism communities. Our funders and team are primarily motivated by making sure that artificial intelligence goes well for the world, in part by being useful for high-quality work like research. Elicit is the only project that Ought currently works on.
You can email help@elicit.org or send a message in the #support channel in the Elicit Slack Workspace. If your request has an Error ID, please share it with us; that can help us resolve issues faster. Screenshots and screen recordings of the problem also help.
In bibtex, you can use the following snippet:
@software{elicit,
  author = {{Ought}},
  title = {Elicit: The AI Research Assistant},
  url = {https://elicit.org},
  year = {2023},
  date = {2023-02-22},
}
In other cases, anything which includes the elicit.org URL is fine, for example:
Ought; Elicit: The AI Research Assistant; https://elicit.org; accessed xxxx/xx/xx
You can email info@elicit.org or send a message in the #feature-requests channel in the Elicit Slack Workspace.
If you have specific requests or would like to participate in user interviews and product research sessions, please email info@elicit.org or send a message in the Elicit Slack Workspace.
About once a quarter, we’ll send out a feedback survey to learn about your overall experiences with Elicit. You can see the results from past feedback surveys here and here.
We find your feedback really valuable. We read and log every piece of feedback we’ve gotten so far.
Thank you for your willingness to help! Here are some ways to contribute:
Note: The prompts employed by Elicit are regularly updated, as we add new functionality and discover new techniques which lead to better results. An example is given below, but it is not a canonical reference.
For the prompt-based Instruct model, the prompt looks like this, with “...” replaced with the query and paper details:
Answer the question "..." based on the extract from a research paper.
Try to answer, but say "... not mentioned in the paper" if you really don't know how to answer.
Include everything that the paper excerpt has to say about the answer.
Make sure everything you say is supported by the extract.
Answer in one phrase or sentence.

Paper title: ...
Paper excerpt: ...
Question: ...
Answer:
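As a sketch of how a template like this gets filled in at query time (the wording mirrors the example above, but the function and field names here are illustrative, not Elicit's actual code):

```python
# Illustrative sketch of substituting the query and paper details into a
# QA prompt template like the one shown above. Not Elicit's actual code.
PROMPT_TEMPLATE = (
    'Answer the question "{question}" based on the extract from a research paper.\n'
    'Try to answer, but say "... not mentioned in the paper" if you really don\'t know how to answer.\n'
    'Include everything that the paper excerpt has to say about the answer.\n'
    'Make sure everything you say is supported by the extract.\n'
    'Answer in one phrase or sentence.\n'
    '\n'
    'Paper title: {title}\n'
    'Paper excerpt: {excerpt}\n'
    'Question: {question}\n'
    'Answer:'
)

def build_prompt(question: str, title: str, excerpt: str) -> str:
    """Fill the template slots; the completed prompt is sent to the model."""
    return PROMPT_TEMPLATE.format(question=question, title=title, excerpt=excerpt)

prompt = build_prompt(
    question="What sample size was used?",
    title="A Trial of X",
    excerpt="We enrolled 120 participants...",
)
```

The model's completion after "Answer:" becomes the cell shown in the results table.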
When you enter a question, we find the most semantically similar papers from Semantic Scholar’s database:
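Semantic search of this kind is typically implemented by embedding the question and candidate papers as vectors and ranking papers by cosine similarity to the question. A minimal sketch with toy vectors (the hand-written embeddings stand in for a real embedding model; this is not Elicit's implementation):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_papers(query_vec, papers):
    """papers: list of (title, embedding) pairs; returns titles, most similar first."""
    scored = [(cosine_similarity(query_vec, vec), title) for title, vec in papers]
    return [title for _, title in sorted(scored, reverse=True)]

# Toy embeddings standing in for the output of a sentence-embedding model.
query = [0.9, 0.1, 0.0]
papers = [
    ("Unrelated paper", [0.0, 0.2, 0.9]),
    ("Highly relevant paper", [0.8, 0.2, 0.1]),
]
ranking = rank_papers(query, papers)
```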
We find and parse PDFs of open access papers using Unpaywall and Grobid:
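For reference, Unpaywall's public API takes a DOI plus an identifying email and returns JSON whose `best_oa_location` field may carry an open-access PDF link; the PDF can then be sent to a Grobid server for structured text extraction. A small sketch (the endpoint is Unpaywall's real public API; the helper functions and the network-call comment are our illustration, not Elicit's code):

```python
# Sketch of the open-access lookup step. The Unpaywall endpoint is real;
# the wrapper functions here are illustrative.
UNPAYWALL_BASE = "https://api.unpaywall.org/v2"

def unpaywall_request_url(doi: str, email: str) -> str:
    """Unpaywall requires an email query parameter identifying the caller."""
    return f"{UNPAYWALL_BASE}/{doi}?email={email}"

def pdf_url_from_response(response: dict):
    """Pull the open-access PDF link, if any, out of an Unpaywall JSON response."""
    location = response.get("best_oa_location") or {}
    return location.get("url_for_pdf")

url = unpaywall_request_url("10.1038/nature12373", "you@example.org")
# A real call would be requests.get(url).json(); the downloaded PDF would then
# be POSTed to a Grobid server's processFulltextDocument endpoint, which
# returns the paper's structure (title, abstract, sections) as TEI XML.
pdf = pdf_url_from_response(
    {"best_oa_location": {"url_for_pdf": "https://example.org/paper.pdf"}}
)
```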
If you star papers and click “show more like starred”, we retrieve paper candidates by using the Semantic Scholar API to find papers that have cited the starred papers, and papers that were cited by the starred papers. We then re-rank these papers using the same method as above.
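The citation-graph expansion can be sketched as follows. The lookup functions stand in for calls to the Semantic Scholar API (which exposes per-paper citations and references endpoints); they are injected here so the sketch runs without network access, and the names are ours, not Elicit's:

```python
def candidate_papers(starred_ids, get_citations, get_references):
    """Collect papers that cite, or are cited by, any starred paper.

    get_citations / get_references stand in for Semantic Scholar API lookups;
    each takes a paper id and returns a list of related paper ids.
    """
    candidates = set()
    for paper_id in starred_ids:
        candidates.update(get_citations(paper_id))   # papers citing this one
        candidates.update(get_references(paper_id))  # papers this one cites
    # Don't re-suggest papers the user already starred.
    return candidates - set(starred_ids)

# Toy citation graph: A is cited by B and C, and A cites D.
cites = {"A": ["B", "C"], "D": ["A"]}
refs = {"A": ["D"], "D": []}
found = candidate_papers(
    ["A"],
    get_citations=lambda p: cites.get(p, []),
    get_references=lambda p: refs.get(p, []),
)
```

The resulting candidate set would then be re-ranked against the original question before being shown.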
For each of the top papers, we extract key information like “outcome measured”, “intervention”, and “sample size” and show them in columns in Elicit.
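The final step can be pictured as assembling per-paper extractions into rows of a table, one column per extracted field. A small sketch (the column names follow the examples in the text; the structure is our illustration, not Elicit's internals):

```python
# Illustrative sketch: turning per-paper extractions into table rows.
COLUMNS = ["outcome measured", "intervention", "sample size"]

def to_table(extractions):
    """extractions: {paper_title: {field: value}} -> list of row dicts.

    Fields the extractor couldn't find are filled with "not mentioned",
    mirroring the fallback wording in the QA prompt.
    """
    rows = []
    for title, fields in extractions.items():
        row = {"paper": title}
        for col in COLUMNS:
            row[col] = fields.get(col, "not mentioned")
        rows.append(row)
    return rows

rows = to_table({
    "A Trial of X": {"sample size": "120", "intervention": "drug X"},
})
```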