<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>llm — FixDevs</title><description>Latest fixes and solutions for llm errors on FixDevs.</description><link>https://fixdevs.com/</link><language>en</language><lastBuildDate>Thu, 09 Apr 2026 00:00:00 GMT</lastBuildDate><atom:link href="https://fixdevs.com/tags/llm/rss.xml" rel="self" type="application/rss+xml"/><item><title>Fix: CrewAI Not Working — Agent Delegation, Task Context, and LLM Configuration Errors</title><link>https://fixdevs.com/blog/crewai-not-working/</link><guid isPermaLink="true">https://fixdevs.com/blog/crewai-not-working/</guid><description>How to fix CrewAI errors — LLM not configured ValidationError, agent delegation loop, task context not passed between agents, tool output truncated, process hierarchical vs sequential, and memory not persisting across runs.</description><pubDate>Thu, 09 Apr 2026 00:00:00 GMT</pubDate><category>python</category><category>crewai</category><category>llm</category><category>agents</category><category>multi-agent</category><category>ai</category><category>debugging</category><author>FixDevs</author></item><item><title>Fix: LangGraph Not Working — State Errors, Checkpointer Setup, and Cyclic Graph Failures</title><link>https://fixdevs.com/blog/langgraph-not-working/</link><guid isPermaLink="true">https://fixdevs.com/blog/langgraph-not-working/</guid><description>How to fix LangGraph errors — state not updating between nodes, checkpointer thread_id required, StateGraph compile error, conditional edges not routing, streaming events missing, recursion limit exceeded, and interrupt handling.</description><pubDate>Thu, 09 Apr 2026 00:00:00 GMT</pubDate><category>python</category><category>langgraph</category><category>langchain</category><category>llm</category><category>agents</category><category>ai</category><category>debugging</category><author>FixDevs</author></item><item><title>Fix: LlamaIndex Not Working — Import Errors, Vector Store Issues, and Query Engine Failures</title><link>https://fixdevs.com/blog/llamaindex-not-working/</link><guid isPermaLink="true">https://fixdevs.com/blog/llamaindex-not-working/</guid><description>How to fix LlamaIndex errors — ImportError llama_index.core module not found, ServiceContext deprecated use Settings instead, vector store index not persisting, query engine returns irrelevant results, and LlamaIndex 0.10 migration.</description><pubDate>Thu, 09 Apr 2026 00:00:00 GMT</pubDate><category>python</category><category>llamaindex</category><category>llama-index</category><category>llm</category><category>rag</category><category>vector-search</category><category>debugging</category><author>FixDevs</author></item><item><title>Fix: vLLM Not Working — CUDA OOM, Model Loading, and API Server Errors</title><link>https://fixdevs.com/blog/vllm-not-working/</link><guid isPermaLink="true">https://fixdevs.com/blog/vllm-not-working/</guid><description>How to fix vLLM errors — CUDA out of memory during model load, tokenizer mismatch with HuggingFace, tensor parallel size does not match GPU count, KV cache exceeds memory, OpenAI API compatibility issues, and max_model_len too large.</description><pubDate>Thu, 09 Apr 2026 00:00:00 GMT</pubDate><category>python</category><category>vllm</category><category>llm</category><category>inference</category><category>machine-learning</category><category>gpu</category><category>debugging</category><author>FixDevs</author></item><item><title>Fix: Hugging Face Transformers Not Working — OSError, CUDA OOM, and Generation Errors</title><link>https://fixdevs.com/blog/huggingface-transformers-not-working/</link><guid isPermaLink="true">https://fixdevs.com/blog/huggingface-transformers-not-working/</guid><description>How to fix Hugging Face Transformers errors — OSError can&apos;t load tokenizer, gated repo access, CUDA out of memory with device_map auto, bitsandbytes not installed, tokenizer padding mismatch, pad_token_id warning, and LoRA adapter loading failures.</description><pubDate>Mon, 06 Apr 2026 00:00:00 GMT</pubDate><category>python</category><category>huggingface</category><category>transformers</category><category>llm</category><category>ai</category><category>debugging</category><author>FixDevs</author></item><item><title>Fix: LangChain Python Not Working — ImportError, Pydantic, and Deprecated Classes</title><link>https://fixdevs.com/blog/langchain-python-not-working/</link><guid isPermaLink="true">https://fixdevs.com/blog/langchain-python-not-working/</guid><description>How to fix LangChain Python errors — ImportError from package split, Pydantic v2 compatibility, AgentExecutor deprecated, ConversationBufferMemory removed, LCEL output type mismatches, and tool calling failures.</description><pubDate>Mon, 06 Apr 2026 00:00:00 GMT</pubDate><category>python</category><category>langchain</category><category>llm</category><category>ai</category><category>debugging</category><author>FixDevs</author></item><item><title>Fix: Ollama Not Working — Connection Refused, Model Not Found, GPU Not Detected</title><link>https://fixdevs.com/blog/ollama-not-working/</link><guid isPermaLink="true">https://fixdevs.com/blog/ollama-not-working/</guid><description>How to fix Ollama errors — connection refused when the daemon isn&apos;t running, model not found, GPU not detected falling back to CPU, port 11434 already in use, VRAM exhausted, and API access from other machines.</description><pubDate>Mon, 06 Apr 2026 00:00:00 GMT</pubDate><category>ollama</category><category>llm</category><category>ai</category><category>gpu</category><category>debugging</category><author>FixDevs</author></item></channel></rss>