Are you serious? Yeah, it's using Gemini 2.5 Pro without a server, sure.
eisbaw 6 hours ago [-]
Why not telnet?
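(A raw TCP socket would get you most of the way there, minus the encryption and auth SSH gives you for free. A minimal sketch in Node, with the model call stubbed out as a hypothetical askModel — nothing here is from the repo:)

    import net from 'node:net';

    // Hypothetical stand-in for the real LLM call; purely illustrative.
    async function askModel(prompt: string): Promise<string> {
      return `echo: ${prompt}`;
    }

    net.createServer(socket => {
      socket.write('ai-chat> ');
      socket.on('data', async chunk => {
        const reply = await askModel(chunk.toString().trim());
        socket.write(`${reply}\r\nai-chat> `);
      });
    }).listen(2323); // try: telnet localhost 2323

Real telnet clients also send negotiation bytes up front, which you'd want to filter out.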
accrual 5 hours ago [-]
I'd love to see an LLM outputting over a Teletype. Just tschtschtschtsch as it hammers away at the paper feed.
cap11235 5 hours ago [-]
Last week or so, someone posted an LLM finetune that speaks like a 19th-century Irish author. Now I'm looking forward to an LLModem model.
RALaBarge 6 hours ago [-]
No HTTPS support
benterix 6 hours ago [-]
I bet someone can write an API Gateway for this...
kimjune01 3 days ago [-]
hey, i just tried it. it's cool! i wish it were more self-aware
ccbikai 2 days ago [-]
Thank you for your feedback; I will optimize the prompt
dncornholio 6 hours ago [-]
Using React to render a CLI tool is something. I'm not sure how I feel about that. It feels like 90% of the code is handling rendering issues.
demosthanos 5 hours ago [-]
I mean, it's a thin wrapper around LLM APIs, so it's not surprising that most of the code is rendering. I'm not sure what you're referring to by "handling issues with rendering", though—it looks like a pretty bog standard React app. Am I missing something?
Previous discussions of Ink:
July 2017 (129 points, 42 comments): https://news.ycombinator.com/item?id=14831961
May 2023 (588 points, 178 comments): https://news.ycombinator.com/item?id=35863837
Nov 2024 (164 points, 106 comments): https://news.ycombinator.com/item?id=42016639
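For anyone who hasn't seen Ink: it points React's reconciler at the terminal instead of the DOM, so "bog standard React app" is literal. A minimal sketch (not code from this repo) rendering a fake token counter:

    import React, {useEffect, useState} from 'react';
    import {render, Box, Text} from 'ink';

    // Fake token stream standing in for an LLM response; purely illustrative.
    const Stream = () => {
      const [tokens, setTokens] = useState(0);

      useEffect(() => {
        const timer = setInterval(() => setTokens(t => t + 1), 100);
        return () => clearInterval(timer);
      }, []);

      return (
        <Box borderStyle="round" paddingX={1}>
          <Text color="green">{tokens} tokens received</Text>
        </Box>
      );
    };

    render(<Stream />);

On each render Ink diffs the output and rewrites only the lines that changed, which is presumably where much of the "handling rendering" code in tools like this comes from.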
But that no longer seems possible given how software is distributed these days, especially with GPU-dependent stuff like LLMs.
So yeah, I get why this exists.
https://tinyfugue.sourceforge.net/
https://en.wikipedia.org/wiki/List_of_MUD_clients
https://github.com/ccbikai/ssh-ai-chat/blob/master/src/ai/in...
https://terminal.odai.chat
You're welcome to help maintain it with me