December Papers: MoE, Fact-storing and Byteifying Language Models
Despite the holiday season and the busy NeurIPS period, December closed the year with a set of insightful papers. Our team reviewed the following three papers: