GenAI Tools

ChatGPT (OpenAI) started the whole GenAI revolution when it was released in late 2022. It quickly reached 100 million users, becoming the fastest-adopted technology in history. Despite the alternatives that have emerged since then, ChatGPT essentially remains the default choice among GenAI tools. OpenAI helps maintain this position by regularly introducing new state-of-the-art features, such as real-time vision models. You can access ChatGPT for free, but advanced features require a subscription.

Some users report better results with Claude (Anthropic) or prefer it because they dislike OpenAI’s business practices. Anthropic claims that its approach to AI development is more responsible, though one has to be careful with such claims. Personally, I switched from ChatGPT to Claude for most of my tasks, but this is mostly a matter of stylistic preference.

The third major player is Gemini (Google), which had a less successful initial launch. Google has tried to compensate by launching products like NotebookLM, which offers a somewhat different interface from its competitors and introduces intriguing features such as generating audio podcasts from text. When I tested this with one of my papers, the result was surprisingly good.

Many of my colleagues pay for all three main AI assistants, which makes the total bill quite expensive. I don’t think this is really necessary. The fierce competition between the major companies ensures that the capabilities of these tools remain quite similar, making the choice largely a matter of taste.

Another notable tool is Microsoft Copilot. While it’s powered by the same technology as ChatGPT (though not always the latest version), what sets it apart is its tight integration with Microsoft Office. I don’t use Microsoft products in my research, so it is of less interest to me, but my colleagues in the administrative department find it quite useful for their everyday tasks. Another important distinction is that it can provide additional data protection. For example, at UTS, Copilot operates within an environment that ensures all computation and data storage occur on Australian servers, in an isolated environment created specifically for our university. This means it can be used for research tasks where data sensitivity would not allow the use of other services.

Finally, it’s worth knowing about the Llama (Meta) models, as they can be run locally on your computer. While you probably won’t be able to run the most advanced models, and even their best model lags behind ChatGPT and Claude, running locally offers the highest possible level of privacy, since everything happens on your own machine. Moreover, you don’t need to pay for a subscription. The easiest way to run Llama without any technical expertise is through software like LM Studio. I mention LM Studio because it is currently free and user-friendly, but I imagine the situation might change in the future and better alternatives might become available.
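If you are comfortable with a little code, LM Studio can also expose the local model through an OpenAI-compatible server, which lets you script repetitive tasks against it. The sketch below is only an illustration under a few assumptions: the local server is running on LM Studio’s default port (1234), a Llama model has already been downloaded and loaded, and the model name is a placeholder you would replace with whatever appears in your own setup.

```python
# A minimal sketch of querying a locally running Llama model through an
# OpenAI-compatible endpoint (as exposed by LM Studio's local server).
# Assumptions: the server is running on the default port 1234 and a model
# is already loaded; "local-model" is a placeholder, not a real model ID.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # local endpoint: nothing leaves your machine
    api_key="not-needed",                 # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="local-model",  # replace with the model name shown in your local app
    messages=[
        {"role": "system", "content": "You are a helpful research assistant."},
        {"role": "user", "content": "In one sentence, explain what a large language model is."},
    ],
)

print(response.choices[0].message.content)
```

Because the request never leaves your machine, the privacy benefit described above carries over to scripted use as well.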

There are, of course, many other tools built on top of large language models. However, the goal of this course is not to provide tutorials for different tools but rather to develop a general understanding of how LLM-based assistants can be useful in research. I believe that, equipped with this knowledge, you will be well-positioned to explore available and emerging tools and decide which ones work best for you.