We would like to improve Harmony’s matching algorithm. Quite often, Harmony mistakenly judges two sentences to be similar when a psychologist would consider them dissimilar, or vice versa. We evaluated Harmony’s performance in this blog post.
Harmony is often misaligned with human evaluators
We would like to improve Harmony with a fine-tuned large language model. We have teamed up with DOXA AI to run an online competition where you can improve on the off-the-shelf LLMs we currently use. You can win up to £500 in vouchers! Click here to join the Harmony matching competition on DOXA AI.
We will hold a livestreamed webinar and onboarding session to launch the competition on Wednesday 30 October at 5pm UK time.
The winner of the competition will receive £500 in vouchers, and the runner-up will receive £250 in vouchers.
The Harmony team has recently published a paper in BMC Psychiatry showing that there is a correlation between Harmony’s cosine similarity values and human evaluators’ ratings, but this could be improved:
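To illustrate the kind of evaluation described above, here is a minimal sketch of how cosine similarity scores for sentence pairs can be compared against human ratings. The embedding vectors and human scores below are made-up illustrative values, not Harmony’s actual model outputs or study data.

```python
# Sketch: comparing embedding cosine similarity with human similarity ratings.
# All vectors and ratings here are hypothetical, for illustration only.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings for three sentence pairs
# (in practice these would come from an LLM sentence encoder).
pairs = [
    (np.array([0.9, 0.1, 0.2]), np.array([0.8, 0.2, 0.1])),
    (np.array([0.1, 0.9, 0.3]), np.array([0.2, 0.8, 0.4])),
    (np.array([0.5, 0.5, 0.0]), np.array([0.0, 0.1, 0.9])),
]
model_scores = [cosine_similarity(a, b) for a, b in pairs]

# Hypothetical human similarity ratings (0-100) for the same pairs.
human_scores = [85, 90, 10]

# Pearson correlation between model scores and human ratings:
# the metric the paper uses to measure how well the model agrees with humans.
r = float(np.corrcoef(model_scores, human_scores)[0, 1])
print(f"Correlation with human evaluators: r = {r:.3f}")
```

A fine-tuned model would aim to push this correlation closer to 1, which is the goal of the competition above.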
Harmony at Women In Data™️ London Chapter (online event)

On 22 November, we will present Harmony at the Women In Data™️ London Chapter’s event on Application of Generative AI and LLMs. Thomas Wood will demonstrate Harmony and how it uses Gen AI. The event will be livestreamed.

⏲️ 25 minutes talk + 10 minutes Q&A
📅 Date: 22 November 2024, 6:15 pm
📝 RSVP

See also our past events:

- 22 November 2024: Harmony at Women In Data™️ London Chapter
- 30 October 2024: Onboarding webinar for DOXA AI competition
- 8 October 2024: Harmony: a free online tool using LLMs for research in psychology and social sciences at AI|DL London
- 11 and 12 September 2024: Harmony at MethodsCon Futures in Manchester
- 2 July 2024: Harmony: NLP and generative models for psychology research at PyData London
- 3 June 2024: Harmony Hackathon at UCL
- 5 May 2024: Harmony: A global platform for harmonisation, translation and cooperation in mental health at the Melbourne Children’s LifeCourse Initiative seminar series
How are research funders reacting to the AI governance vacuum? A recent article by Sense about Science, a leading independent charity that promotes the public interest in sound science and evidence, highlights the growing need for responsible AI governance in research. The article, titled Research funders tackle AI governance vacuum with pragmatic guidance, discusses the gap between the rapid development and adoption of AI tools and the lack of clear frameworks for their safe and ethical use.