
GPT-4 Turbo's 64k Triumph: Claude-2.1 Trips Up


GPT-4 Turbo’s large context looks like the real deal, and it could be one of the biggest improvements in this release. It survives pressure testing all the way out to a 64k context length remarkably well, while Claude-2.1 stumbles frequently. https://x.com/SteveMoraco/status/1727370446788530236
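The pressure test in question is a needle-in-a-haystack style retrieval check: bury a single fact at varying depths inside a long filler document and ask the model to recall it. Below is a minimal sketch of that idea, assuming the official `openai` Python client; the model name, needle fact, filler text, and pass check are illustrative choices, not the exact setup from the linked test.

```python
# Minimal needle-in-a-haystack pressure test sketch.
# Requires the official `openai` package (>= 1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical needle fact and filler sentence -- illustrative, not the original test's data.
NEEDLE = "The best thing to do in San Francisco is eat a sandwich in Dolores Park on a sunny day."
FILLER = "The quick brown fox jumps over the lazy dog. "


def build_haystack(total_chars: int, depth: float) -> str:
    """Repeat filler text to roughly `total_chars` characters and bury the needle
    at the given depth (0.0 = start of the document, 1.0 = end)."""
    haystack = FILLER * (total_chars // len(FILLER))
    insert_at = int(len(haystack) * depth)
    return haystack[:insert_at] + NEEDLE + haystack[insert_at:]


def run_test(model: str, total_chars: int, depth: float) -> bool:
    document = build_haystack(total_chars, depth)
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Answer only from the provided document."},
            {"role": "user", "content": document + "\n\nWhat is the best thing to do in San Francisco?"},
        ],
    )
    answer = response.choices[0].message.content or ""
    # Crude pass/fail: did the model retrieve the buried fact?
    return "Dolores Park" in answer


if __name__ == "__main__":
    # Roughly 4 characters per token, so ~256k characters approximates a 64k-token context.
    for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
        passed = run_test("gpt-4-1106-preview", total_chars=64_000 * 4, depth=depth)
        print(f"depth={depth:.2f}: {'PASS' if passed else 'FAIL'}")
```

Sweeping the insertion depth matters because retrieval quality tends to vary with where the fact sits in the context, which is exactly what the linked results chart.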
