[{"content":" Portfolio Intro # Welcome to my portfolio. This section introduces my work and background.\nAbout # Add your introduction content here.\nSkills # List your key skills and competencies.\nProjects # Highlight your featured projects.\n","externalUrl":null,"permalink":"/portfolio-hugo/posts/1-intro/","section":"Posts","summary":"","title":"Intro","type":"posts"},{"content":"","date":"11 May 2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/ai/","section":"Tags","summary":"","title":"Ai","type":"tags"},{"content":"","date":"11 May 2026","externalUrl":null,"permalink":"/portfolio-hugo/categories/ai/","section":"Categories","summary":"","title":"AI","type":"categories"},{"content":"","date":"11 May 2026","externalUrl":null,"permalink":"/portfolio-hugo/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":"","date":"11 May 2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/automation/","section":"Tags","summary":"","title":"Automation","type":"tags"},{"content":"","date":"11 May 2026","externalUrl":null,"permalink":"/portfolio-hugo/authors/bubberr/","section":"Authors","summary":"","title":"Bubberr","type":"authors"},{"content":"","date":"11 May 2026","externalUrl":null,"permalink":"/portfolio-hugo/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":" Background # As part of our semester, our entire class had a visit from E.G. — a large estate with a wide range of activities and events. The visit gave us a chance to talk directly with the people running the place and get a feel for how they operate day-to-day.\nOne of the things that caught my attention was their annual Christmas market, where external vendors rent stalls and sell everything from handmade crafts to holiday decorations. 
It sounds straightforward — but behind the scenes, there is a surprising amount of coordination involved.\nThe Problem: An Inbox Full of Stall Requests # Right now, anyone who wants to rent a stall at the Christmas market has to contact the owner directly by email. She then reads through every single enquiry and replies individually — confirming or declining each one by hand.\nWhen the market is popular, that means dozens of back-and-forth emails to manage, no central overview of who has applied, and a lot of time spent on something that could largely run itself.\nThe Idea: A Vendor Application Form # The fix does not need to be complicated. A simple form on E.G.\u0026rsquo;s existing website would let vendors fill in their details — name, contact info, what they sell, any practical requirements — and submit their application in one go.\nOn the owner\u0026rsquo;s side, instead of an inbox full of emails, she gets a clean list of all applicants where she can confirm or decline each one with a single click. The system then sends the appropriate response automatically.\nThe core flow would look like this:\nVendor fills out form on the website ↓ Application added to owner\u0026#39;s dashboard ↓ Owner reviews and confirms or declines ↓ Vendor receives automatic confirmation or rejection email No inbox archaeology. No manually written replies. Just a list to work through.\nWhere AI Fits In # The interesting part is that AI can add real value at several layers of a system like this — not just as a gimmick, but as something that actually solves problems.\n1. Smarter Confirmation Emails # Once a vendor is confirmed, a language model can use the data they submitted — name, product type, any special requirements — to generate a confirmation email that feels personal rather than templated:\n\u0026ldquo;Hi Maren, your application has been approved! We\u0026rsquo;re looking forward to seeing your homemade Christmas wreaths at this year\u0026rsquo;s market. 
You\u0026rsquo;ll find all the practical details below\u0026hellip;\u0026rdquo;\nThe owner clicks confirm once. The vendor gets a message that feels like it was written for them.\n2. Answering Vendor Questions # A simple chat assistant on E.G.\u0026rsquo;s website — trained on their rules, FAQ, and practical info — could handle 80% of the questions vendors ask, without anyone at E.G. needing to respond manually.\n3. AI as a Tool for the Developer # And this is what actually surprised me most when thinking through the E.G. case: AI is not only useful inside the system. It is also useful during development.\nWhen mapping out the form fields and the owner\u0026rsquo;s dashboard, I used Claude to:\nAsk the questions I had not thought to ask myself Draft template emails based on my description of the audience Suggest edge cases (\u0026ldquo;what happens if a vendor cancels two days before the market?\u0026rdquo;, \u0026ldquo;should declined applicants be able to reapply?\u0026rdquo;) It accelerated the design phase significantly. Instead of sitting alone trying to think things through, I had a sounding board that could quickly generate alternatives and point out gaps.\nOther Systems E.G. Could Benefit From # Beyond the application form, there are other areas where structured thinking and simple tooling could make a real difference:\nArea Current problem Possible solution Booking management Manual handling of enquiries Simple booking app with calendar integration Payment overview Unclear who has paid what Automated invoicing with a status dashboard Internal communication No overview across events A lightweight internal notification system Vendor feedback No structured post-event evaluation Automatic follow-up email with a feedback form None of these require advanced technology — but they do require someone to think them through and build them properly.\nWhat I Took Away # Visiting E.G. 
as a class was my first real encounter with a customer who does not know exactly what they are missing — only that something feels laborious. That is a fundamentally different problem than building to a specification.\nIt taught me that system design starts with listening, and that the best solutions are often the ones that remove work rather than add features.\nAnd it reinforced something I keep coming back to: AI is not a plugin you bolt on at the end. It can sit at the table from the very beginning — from idea through to implementation.\n","date":"11 May 2026","externalUrl":null,"permalink":"/portfolio-hugo/posts/customer-case/","section":"Posts","summary":"","title":"Customer Case: What a Christmas Market Taught Me About System Design","type":"posts"},{"content":"","date":"11 May 2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/customer-case/","section":"Tags","summary":"","title":"Customer-Case","type":"tags"},{"content":"","date":"11 May 2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/email/","section":"Tags","summary":"","title":"Email","type":"tags"},{"content":"Welcome to my portfolio site.\n","date":"11 May 2026","externalUrl":null,"permalink":"/portfolio-hugo/","section":"Forside","summary":"","title":"Forside","type":"page"},{"content":"","date":"11 May 2026","externalUrl":null,"permalink":"/portfolio-hugo/posts/","section":"Posts","summary":"","title":"Posts","type":"posts"},{"content":"","date":"11 May 2026","externalUrl":null,"permalink":"/portfolio-hugo/categories/projects/","section":"Categories","summary":"","title":"Projects","type":"categories"},{"content":"","date":"11 May 2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/system-design/","section":"Tags","summary":"","title":"System-Design","type":"tags"},{"content":"","date":"11 May 2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":"","date":"4 May 
2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/code-agents/","section":"Tags","summary":"","title":"Code-Agents","type":"tags"},{"content":"","date":"4 May 2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/ethics/","section":"Tags","summary":"","title":"Ethics","type":"tags"},{"content":"","date":"4 May 2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/gdpr/","section":"Tags","summary":"","title":"Gdpr","type":"tags"},{"content":"","date":"4 May 2026","externalUrl":null,"permalink":"/portfolio-hugo/categories/method/","section":"Categories","summary":"","title":"Method","type":"categories"},{"content":"","date":"4 May 2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/software-development/","section":"Tags","summary":"","title":"Software-Development","type":"tags"},{"content":"","date":"4 May 2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/spec/","section":"Tags","summary":"","title":"Spec","type":"tags"},{"content":" What is a spec, and why does it matter now? # A specification is a precise description of what a system should do — not how. It captures requirements, flows, acceptance criteria, and constraints in a single governing document. In traditional development, specs are used to communicate between stakeholders and developers. In AI-driven development, they take on a new role: they are instructions to an agent.\nThat is the critical difference. When I work with a code agent like Claude Code, having a loose idea of what I want is not enough. The agent acts on what I say — and nothing more. A vague task produces a vague result.\nThe spec as a governing artefact # Spec-driven development means letting the specification drive the entire process: requirements → design → implementation → review. 
Instead of improvising along the way, you start by defining what must be true when a feature is done.\nThat requires thinking in terms of:\nElement Example Requirement \u0026ldquo;The system must validate input against the rubric before sending it to the LLM\u0026rdquo; Flow \u0026ldquo;User submits text → controller validates → service builds prompt → LLM responds → frontend renders\u0026rdquo; Acceptance criteria \u0026ldquo;Reject with 400 if assignmentText is empty or \u0026gt; 50,000 characters\u0026rdquo; Constraints \u0026ldquo;No user data may be persisted — the system is stateless\u0026rdquo; These elements are not just documentation. They are the contract a code agent works from.\nPeter Naur and the theory behind the code # In 1985, Peter Naur described programming as theory building — the idea that the most important product of software development is not the code, but the mental model (the theory) the developer builds around the problem and its solution. The code is merely a deposit of that theory.\nThat is a provocative thought at a time when code agents can produce code faster than we can read it. If the theory lives in the developer\u0026rsquo;s head but the code is written by an agent — who owns the understanding?\nMy answer: the theory must live in the specification. The spec is the place where human understanding is formalised and made accessible — both for agents that implement, and for future developers who take over. A good spec is a written-down theory.\nLegal and ethical considerations # When AI is part of development, questions arise that are not purely technical.\nGDPR and data handling — if a code agent has access to files containing personal data, you need to consider what gets sent to an external API. In the LLM API project, we addressed this with a stateless design: no user data is stored, and only the assignment text (without personal identifiers) is passed on. 
That is a design decision driven by a constraint in the spec.\nBias and fairness — an LLM that assesses student reports may have systematic biases. If the model consistently scores certain writing patterns lower, that is not just a technical problem — it is a fairness problem. The specification should include a requirement that output is reviewed by a human, and that the model is never the final judge.\nAccountability — who is responsible if a code agent introduces a security vulnerability? The person who prompted it. Specs are a way of documenting the intent behind the code, so it becomes possible to tell whether a bug is a faulty implementation of a correct spec, or a flaw in the spec itself.\nHow I plan to use specs going forward # I have learned to prompt on the fly — giving the agent context in the moment. It works, but it does not scale. The next time I start a project I will:\nWrite a feature spec before giving the agent a task. Not necessarily a long document — five to ten lines defining requirements, flow, and acceptance criteria is often enough.\nUse the spec as a review baseline. When the agent delivers a solution, I hold it up against the spec. That is faster than guessing whether the result is \u0026ldquo;good enough\u0026rdquo;.\nDocument constraints explicitly. Things like \u0026ldquo;no user data is persisted\u0026rdquo; or \u0026ldquo;output is always reviewed by a human\u0026rdquo; belong in the spec — not as a verbal agreement.\nLet the spec drive acceptance tests. Each acceptance criterion is a test case. The agent can help write the tests, but the criteria must come from me.\nReflection # Spec-driven development is not a bureaucratic exercise. It is a way of honouring what Naur called theory building — ensuring that human understanding is not lost in the speed that AI-driven development gives us.\nA code agent is powerful but uncritical. It does what it is told. 
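Which is why the acceptance criteria matter: the criterion from the table earlier ("Reject with 400 if assignmentText is empty or > 50,000 characters") translates almost mechanically into a checkable rule. A minimal sketch, assuming a helper of my own naming — isValid is hypothetical, not code from the project:

```java
// The acceptance criterion "reject if assignmentText is empty or
// > 50,000 characters" expressed as a checkable rule. isValid is a
// hypothetical helper; in the real project the check would sit in the
// controller and map a false result to an HTTP 400.
public class AssessSpec {
    static final int MAX_CHARS = 50_000;

    public static boolean isValid(String assignmentText) {
        return assignmentText != null
                && !assignmentText.isBlank()
                && assignmentText.length() <= MAX_CHARS;
    }
}
```

Each criterion in the spec becomes one such assertion; the agent can write the test bodies, but the thresholds come from the spec.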
That means the quality of my output is directly proportional to the quality of my spec. That is a discipline, not a limitation — and one I intend to practise.\n","date":"4 May 2026","externalUrl":null,"permalink":"/portfolio-hugo/posts/spec-driven-dev/","section":"Posts","summary":"","title":"Spec-Driven Development in an AI-Driven World","type":"posts"},{"content":"","date":"29 April 2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/agents/","section":"Tags","summary":"","title":"Agents","type":"tags"},{"content":"","date":"29 April 2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/api/","section":"Tags","summary":"","title":"Api","type":"tags"},{"content":"","date":"29 April 2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/claude/","section":"Tags","summary":"","title":"Claude","type":"tags"},{"content":"","date":"29 April 2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/developer-tools/","section":"Tags","summary":"","title":"Developer-Tools","type":"tags"},{"content":"","date":"29 April 2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/groq/","section":"Tags","summary":"","title":"Groq","type":"tags"},{"content":"","date":"29 April 2026","externalUrl":null,"permalink":"/portfolio-hugo/categories/java/","section":"Categories","summary":"","title":"Java","type":"categories"},{"content":"","date":"29 April 2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/java/","section":"Tags","summary":"","title":"Java","type":"tags"},{"content":"","date":"29 April 2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/llm/","section":"Tags","summary":"","title":"Llm","type":"tags"},{"content":" Building an AI-Powered Assignment Assessment Tool # A full-stack web application that uses a large language model to give students structured, rubric-based feedback on their internship reports (praktikrapporter) for the Datamatiker education programme in Denmark.\nThe tool lets a teacher or student paste an assignment text into 
a web form. The backend sends it — together with a predefined rubric — to a Groq-hosted LLM, which returns structured JSON feedback. The frontend then renders this as a clear, colour-coded report.\nThe assessment covers five weighted criteria drawn from the Datamatiker study programme:\nCriterion Weight Opfyldelse af læringsmål fra studieordningen 25 % Faglig refleksion og teorianvendelse 25 % Personlig og professionel udvikling 20 % Praktikkens udbytte for virksomhed og studerende 20 % Struktur, sprog og formalia 10 % Each criterion is scored Lav / Middel / Høj with written justification referencing specific parts of the text.\nTech Stack # Layer Technology Backend language Java 17 Web framework Javalin 6.3 HTTP client OkHttp 4.12 JSON Jackson Databind 2.17 Build tool Maven Frontend React 18 + Vite 5 LLM provider Groq API (OpenAI-compatible) Default model llama-3.1-8b-instant Architecture # ┌─────────────────────────────────────────────────────────┐ │ React Frontend │ │ AssessmentForm ──► App ──► AssessmentResult │ └────────────────────────┬────────────────────────────────┘ │ POST /api/assess (JSON) ▼ ┌─────────────────────────────────────────────────────────┐ │ Javalin REST API :7070 │ │ │ │ AppConfig (CORS + routes) │ │ │ │ │ ▼ │ │ AssessmentController │ │ │ validates request │ │ ▼ │ │ AssessmentService │ │ │ loads rubric.json, builds prompt │ │ ▼ │ │ LLMService ──► Groq API ──► llama-3.1-8b-instant │ └─────────────────────────────────────────────────────────┘ Data flow # User submits assignment text via the React form. AssessmentController validates the request body is non-empty. AssessmentService reads the rubric from rubric.json and builds a structured prompt that includes the full rubric and the assignment text. LLMService calls the Groq API with a system prompt that instructs the model to return only a valid JSON object. The JSON response is stripped of any markdown code fences and deserialised into AssessmentResponse. 
A disclaimer is appended and the response is returned to the frontend. AssessmentResult renders the result as cards: overall level badge, per-criterion cards, strengths/weaknesses columns, improvement suggestions, and dialogue questions. API Endpoints # Method Path Description POST /api/assess Submit assignment text, receive structured assessment GET /api/rubric Return the full rubric as JSON GET /api/health Health check — returns OK POST /api/assess # Request body:\n{ \u0026#34;assignmentText\u0026#34;: \u0026#34;The full text of the student\u0026#39;s report...\u0026#34; } Response body:\n{ \u0026#34;overallAssessment\u0026#34;: \u0026#34;Short summary of the assessment...\u0026#34;, \u0026#34;overallLevel\u0026#34;: \u0026#34;Middel\u0026#34;, \u0026#34;criteriaFeedback\u0026#34;: [ { \u0026#34;criterionName\u0026#34;: \u0026#34;Opfyldelse af læringsmål fra studieordningen\u0026#34;, \u0026#34;level\u0026#34;: \u0026#34;Høj\u0026#34;, \u0026#34;feedback\u0026#34;: \u0026#34;The student clearly documents...\u0026#34; } ], \u0026#34;strengths\u0026#34;: [\u0026#34;...\u0026#34;], \u0026#34;weaknesses\u0026#34;: [\u0026#34;...\u0026#34;], \u0026#34;improvementSuggestions\u0026#34;: [\u0026#34;...\u0026#34;], \u0026#34;dialogQuestions\u0026#34;: [\u0026#34;...\u0026#34;], \u0026#34;disclaimer\u0026#34;: \u0026#34;Dette er en vejledende AI-baseret vurdering og ikke en officiel eller endelig bedømmelse.\u0026#34; } Project Structure # llm-api/ ├── src/main/java/dat/ │ ├── Main.java # Entry point – starts Javalin on port 7070 │ ├── config/ │ │ └── AppConfig.java # CORS config + route registration │ ├── controllers/ │ │ └── AssessmentController.java # Request validation and response handling │ ├── services/ │ │ ├── AssessmentService.java # Rubric loading, prompt construction │ │ └── LLMService.java # Groq API integration via OkHttp │ ├── dtos/ │ │ ├── AssessmentRequest.java │ │ ├── AssessmentResponse.java │ │ └── CriterionFeedback.java │ └── models/ │ ├── Rubric.java # 
Rubric model + prompt serialisation │ └── RubricCriterion.java ├── src/main/resources/ │ └── rubric.json # Rubric definition (weights + level descriptions) ├── frontend/ │ ├── src/ │ │ ├── App.jsx # State management + API calls │ │ └── components/ │ │ ├── AssessmentForm.jsx # Text input form with character count │ │ └── AssessmentResult.jsx # Result cards with level badges │ ├── index.html │ └── vite.config.js ├── .env.example # Required environment variables └── pom.xml Getting Started # Prerequisites # Java 17+ Maven 3.8+ Node.js 18+ A Groq API key (free tier available) 1. Configure environment # cp .env.example .env # Edit .env and add your Groq API key: # GROQ_API_KEY=your_key_here # LLM_MODEL=llama-3.1-8b-instant 2. Start the backend # mvn compile exec:java -Dexec.mainClass=\u0026#34;dat.Main\u0026#34; # Server runs on http://localhost:7070 3. Start the frontend # cd frontend npm install npm run dev # Frontend runs on http://localhost:5173 Design Decisions # Groq over OpenAI — Groq\u0026rsquo;s inference API is OpenAI-compatible, free to get started with, and significantly faster for open-weight models like Llama 3.1. The LLMService can be pointed at any OpenAI-compatible endpoint by changing API_URL.\nRubric as JSON, not code — The rubric lives in rubric.json so it can be edited without recompiling. Rubric.toPromptString() serialises it into a human-readable block that the LLM understands reliably.\nStrict JSON output — The system prompt instructs the model to return only a JSON object. LLMService.cleanJsonResponse() strips markdown code fences as a safety net, since some models wrap JSON in triple backticks despite instructions.\nNo database — The application is stateless. Each request is self-contained: rubric + assignment text → LLM → response. 
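The fence-stripping safety net described under "Strict JSON output" could be sketched roughly like this; an illustrative reconstruction of the behaviour the post describes, not the project's actual LLMService code:

```java
// Illustrative reconstruction of the cleanJsonResponse safety net:
// some models wrap their JSON in markdown code fences despite being
// instructed not to, so we strip a leading fence line (``` or ```json)
// and a trailing fence before deserialising.
public class JsonCleaner {
    public static String cleanJsonResponse(String raw) {
        String s = raw.trim();
        if (s.startsWith("```")) {
            int newline = s.indexOf('\n');       // end of the opening fence line
            s = (newline >= 0) ? s.substring(newline + 1) : "";
        }
        if (s.endsWith("```")) {
            s = s.substring(0, s.length() - 3);  // drop the closing fence
        }
        return s.trim();
    }
}
```

In the flow above, this would run on the raw completion before Jackson deserialises it into AssessmentResponse.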
This keeps deployment simple and avoids storing student data.\nDeployment # To deploy, add the Maven Shade Plugin to pom.xml to produce a fat JAR:\n\u0026lt;build\u0026gt; \u0026lt;plugins\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.apache.maven.plugins\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;maven-shade-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;3.5.0\u0026lt;/version\u0026gt; \u0026lt;executions\u0026gt; \u0026lt;execution\u0026gt; \u0026lt;phase\u0026gt;package\u0026lt;/phase\u0026gt; \u0026lt;goals\u0026gt;\u0026lt;goal\u0026gt;shade\u0026lt;/goal\u0026gt;\u0026lt;/goals\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;transformers\u0026gt; \u0026lt;transformer implementation=\u0026#34;org.apache.maven.plugins.shade.resource.ManifestResourceTransformer\u0026#34;\u0026gt; \u0026lt;mainClass\u0026gt;dat.Main\u0026lt;/mainClass\u0026gt; \u0026lt;/transformer\u0026gt; \u0026lt;/transformers\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/execution\u0026gt; \u0026lt;/executions\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;/plugins\u0026gt; \u0026lt;/build\u0026gt; Then:\nmvn package # → target/llm-api-1.0-SNAPSHOT.jar cd frontend \u0026amp;\u0026amp; npm run build # → frontend/dist/ The JAR can be run on any server with java -jar llm-api-1.0-SNAPSHOT.jar. The frontend dist/ folder can be deployed to Netlify, Vercel, or served via nginx. For a single-service deployment, Javalin can be configured to serve the dist/ folder as static files.\n","date":"29 April 2026","externalUrl":null,"permalink":"/portfolio-hugo/posts/llm-api/","section":"Posts","summary":"","title":"LLM API – AI Assignment Assessment","type":"posts"},{"content":" From autocomplete to agentic AI # I have used GitHub Copilot quite a bit – the kind of AI that completes a line or suggests a function. It is useful, but it is still me doing the navigating, cutting, searching, and fixing. Code agents are something else.\nA code agent reads files, searches the codebase, fixes errors, and runs commands in one go while I describe what I want. That was the experience that surprised me most when I started using Claude Code.\nWhat is a code agent? # A code agent is an AI system that does not just generate text but acts in an environment. It has access to tools:\nTool What the agent does File system Reads and writes files Terminal Runs commands and tests Search Finds relevant places in the codebase Web Looks up documentation That means the agent can take a task like \u0026ldquo;add an endpoint that returns the rubric as JSON\u0026rdquo; and work out for itself: which file does this belong in? What is the convention in this project? What is missing? – and then just do it.\nWhat surprised me # It asks when things are unclear. I expected the agent to simply guess and produce something wrong. But if a task is ambiguous, it stops and clarifies – a bit like a colleague who refuses to waste time solving the wrong problem.\nIt looks at the whole project. Copilot sees what is open in the editor. Claude Code actively searches the entire codebase, checks dependencies, and follows the conventions it finds. That yields a different quality of suggestion.\nIt is not always right. The agent makes mistakes, just like everyone else. It can misread the context or make assumptions that do not hold. Reviewing what it produces is still my responsibility – I just do that faster than I could write it from scratch.\nChanges to my workflow # Before agents, my typical loop was:\nwrite code → error → google → Stack Overflow → try again With a code agent, the loop looks more like this:\ndescribe the problem → review the solution → approve or correct That is a marked shift in what I spend mental resources on. Less syntax, more architecture and design decisions.\nLimitations I have run into # Large refactorings require precise instructions. The more open-ended a task is, the more generic the output. The agent does not know what you did not say. If you forget to mention a requirement, it forgets it too. The context window is not infinite. On large projects the agent can lose track of files it has already seen. Much of this comes down to learning to prompt well – giving enough context for the agent to make good decisions.\nReflection # Code agents are not \u0026ldquo;AI that replaces the programmer\u0026rdquo;. They are a tool that shifts the work – from writing code to describing and reviewing code. You still need to understand the code you approve.\nFor me, it has made experimenting easier. I hesitate less to try a new approach because iterations are faster. It made projects like the RAG automation and the LLM API quicker to build than they would otherwise have been.\nI am still at the beginning of understanding when agents help and when they get in the way. But this first experience was convincing enough that they are now a permanent part of my workflow.\n","date":"29 April 2026","externalUrl":null,"permalink":"/portfolio-hugo/posts/code-agents/","section":"Posts","summary":"","title":"My First Experiences with Code Agents","type":"posts"},{"content":"","date":"29 April 2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/react/","section":"Tags","summary":"","title":"React","type":"tags"},{"content":"","date":"29 April 2026","externalUrl":null,"permalink":"/portfolio-hugo/categories/tools/","section":"Categories","summary":"","title":"Tools","type":"categories"},{"content":"","date":"29 April 2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/workflow/","section":"Tags","summary":"","title":"Workflow","type":"tags"},{"content":"","date":"22 April 2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/automated/","section":"Tags","summary":"","title":"Automated","type":"tags"},{"content":"","date":"22 April 
2026","externalUrl":null,"permalink":"/portfolio-hugo/tags/chatbot/","section":"Tags","summary":"","title":"Chatbot","type":"tags"},{"content":"","date":"22 April 2026","externalUrl":null,"permalink":"/portfolio-hugo/categories/rag/","section":"Categories","summary":"","title":"RAG","type":"categories"},{"content":" # Automated RAG Bot with Dify – Always in Sync with My Portfolio \u0026gt; **TL;DR:** I built a Java program that automatically reads all pages from my portfolio sitemap, converts them to Markdown via Jina AI, and uploads them to a Dify knowledge base — so my RAG chatbot always reflects the latest content on my site. --- ## Background I have a portfolio hosted on GitHub Pages and an AI chatbot built with [Dify](https://dify.ai) that visitors can use to ask questions about my work. The problem was: **every time I updated my portfolio, the chatbot was still answering based on outdated content.** The solution? An automated workflow that syncs content from my portfolio directly into Dify\u0026#39;s RAG knowledge base — with no manual intervention. --- ## Architecture \u0026amp; Flow The workflow consists of three steps: Portfolio (sitemap.xml) │ ▼ [1] Fetch all page URLs from sitemap │ ▼ [2] Convert each page to Markdown via Jina AI │ ▼ [3] Upload Markdown files to Dify dataset via API\nEverything is handled by a single Java program: `SitemapToDify.java`. 
--- ## Implementation ### Step 1 – Fetch URLs from the Sitemap I use `OkHttp` to fetch `sitemap.xml` and `Jsoup` to parse the XML and extract all `\u0026lt;loc\u0026gt;` tags: public static List\u0026lt;String\u0026gt; fetchSitemapUrls(String sitemapUrl) throws IOException { Request request = new Request.Builder().url(sitemapUrl).build(); Response response = client.newCall(request).execute(); String xml = response.body().string(); Document doc = Jsoup.parse(xml, \u0026#34;\u0026#34;, org.jsoup.parser.Parser.xmlParser()); Elements locElements = doc.select(\u0026#34;loc\u0026#34;); List\u0026lt;String\u0026gt; urls = new ArrayList\u0026lt;\u0026gt;(); locElements.forEach(e -\u0026gt; urls.add(e.text())); return urls; } The result is a complete list of every page URL on my portfolio — automatically, with no hardcoding.\nStep 2 – Convert Pages to Markdown via Jina AI # Jina AI\u0026rsquo;s Reader API takes a URL and returns the page content as clean Markdown — perfect for RAG, since it strips away all the noise: HTML tags, navigation, footers, and ads:\npublic static String fetchMarkdownFromJina(String url) throws IOException { String jinaUrl = \u0026#34;https://r.jina.ai/\u0026#34; + url; Request request = new Request.Builder() .url(jinaUrl) .header(\u0026#34;Accept\u0026#34;, \u0026#34;text/plain\u0026#34;) .build(); Response response = client.newCall(request).execute(); if (!response.isSuccessful()) return null; return response.body().string(); } Just prefix any URL with https://r.jina.ai/ and Jina returns clean Markdown. Simple and effective.\nStep 3 – Upload to Dify Dataset # Each Markdown file is uploaded as a document to my Dify knowledge base via their REST API. 
I use `multipart/form-data` with two parts: the file itself and a JSON configuration string:

```java
public static void uploadToDify(String url, String markdown) throws IOException {
    String fileName = url.replaceAll("[^a-zA-Z0-9]", "_") + ".md";
    String dataJson = new JSONObject()
            .put("indexing_technique", "high_quality")
            .put("doc_form", "text_model")
            .put("doc_language", "English")
            .put("process_rule", new JSONObject().put("mode", "automatic"))
            .toString();
    RequestBody requestBody = new MultipartBody.Builder()
            .setType(MultipartBody.FORM)
            .addFormDataPart("file", fileName,
                    RequestBody.create(markdown, MediaType.parse("text/markdown")))
            .addFormDataPart("data", dataJson)
            .build();
    Request request = new Request.Builder()
            .url(DIFY_API)
            .post(requestBody)
            .addHeader("Authorization", "Bearer " + DIFY_API_KEY)
            .build();
    // close the response even though the body is not used
    client.newCall(request).execute().close();
}
```

One important detail: the `data` field must be a serialized JSON string, not a nested object — this caught me off guard and took a while to figure out.

### Main Loop

Everything is tied together in `main()`:

```java
public static void main(String[] args) throws Exception {
    List<String> urls = fetchSitemapUrls(SITEMAP_URL);
    for (String url : urls) {
        System.out.println("Processing: " + url);
        String markdown = fetchMarkdownFromJina(url);
        if (markdown != null && !markdown.isEmpty()) {
            uploadToDify(url, markdown);
        }
        Thread.sleep(1000); // avoid rate limits
    }
}
```

I add a one-second delay after each page to avoid hitting rate limits on Jina's API.

## Technologies Used

| Technology | Purpose |
| --- | --- |
| Java | Programming language for the workflow |
| OkHttp | HTTP client for API calls |
| Jsoup | XML/HTML parsing of the sitemap |
| Jina AI Reader | Converting web pages to Markdown |
| Dify | RAG platform and chatbot engine |
| GitHub Pages | Portfolio hosting (Hugo) |

## What I Learned

- **Jina AI Reader is underrated.** Converting an entire web page to clean Markdown with a single API call is extremely useful for RAG pipelines — no scraping logic, no HTML parsing, just content.
- **Dify's API is well-documented, but detail-oriented.** The `data` field in the multipart request must be a serialized JSON string, not an object. Easy to overlook, but critical for the upload to succeed.
- **Automation pays off quickly.** I can now run the script after every portfolio update — or set it up as a GitHub Action that triggers on every push — and my chatbot is always up to date.

## Next Steps

- Set up the workflow as a GitHub Action that runs automatically on every push to the portfolio repo
- Add deletion of old documents in Dify before re-uploading, to prevent duplicate entries from accumulating
- Add language detection instead of hardcoding `"English"`

The code is written as a standalone Java program.
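On the deduplication point in Next Steps: rather than deleting and re-uploading everything, one option is to fingerprint each page's Markdown and skip any upload whose content has not changed since the last run. A small sketch of the fingerprint half, assuming some cache persists the hashes between runs (not shown):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Fingerprint a page's Markdown with SHA-256 so unchanged pages can
// be skipped instead of re-uploaded. Storing the hashes between runs
// (e.g. in a small JSON file next to the program) is left out here.
public class PageFingerprint {
    public static String sha256Hex(String markdown) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(markdown.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder(digest.length * 2);
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 is always available", e);
        }
    }

    public static void main(String[] args) {
        // Same content gives the same fingerprint; any edit changes it.
        System.out.println(sha256Hex("# My Portfolio\nSome page content."));
    }
}
```

The main loop would then compute `sha256Hex` on each fetched Markdown string and only call `uploadToDify` when the hash differs from the stored one.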
If you have questions about the implementation, feel free to reach out.

*22 April 2026*

---

# Building a RAG Chatbot with Dify.ai

I recently built a Retrieval-Augmented Generation (RAG) chatbot using Dify.ai, a no-code AI application platform that simplifies the process of creating sophisticated AI solutions without extensive coding knowledge.

## What is Dify.ai?

Dify.ai is an open-source platform designed to streamline the development of AI applications.
It provides an intuitive interface for building, testing, and deploying AI chatbots that can be enhanced with custom knowledge bases and external data sources.

## How I Built the RAG Chatbot

Using Dify.ai, I was able to:

1. **Create a Knowledge Base** - Upload and manage custom documents that the chatbot can reference when answering questions
2. **Configure the AI Model** - Select and configure the underlying language model for optimal performance
3. **Implement RAG** - Set up retrieval-augmented generation to ensure responses are grounded in the provided documents and data
4. **Test and Iterate** - Use the built-in testing interface to refine prompts and improve response quality
5. **Deploy** - Launch the chatbot as an API or embed it directly into web applications

## Key Benefits

- **No Code Required** - The visual interface eliminated the need for custom backend development
- **Flexible Integration** - Easy integration with various data sources and external APIs
- **Rapid Prototyping** - Quick iteration cycle from concept to working chatbot
- **Scalability** - Built-in support for handling multiple concurrent conversations

This approach allowed me to focus on the domain knowledge and user experience rather than infrastructure and model implementation details.

*13 April 2026*

---

# Projects

Marcus Rasmussen · Software Developer · github.com/Bubberr · linkedin.com/in/marcus-rasmussen-a346bb254

## Project Alpha

Full-stack web app · 2025

A brief one or two sentence description of what the project does and why it exists. Focus on the problem it solves, not just the tech stack.

**Tech:** React, Node.js, PostgreSQL, Docker

- Built X feature that reduced Y by Z%
- Designed and implemented the REST API from scratch
- Deployed on AWS with CI/CD via GitHub Actions

View on GitHub · Live demo

## Project Beta

CLI tool · 2024

A brief one or two sentence description of what the project does and why it exists. Focus on the problem it solves, not just the tech stack.

**Tech:** Go, SQLite

- Processes X input and outputs Y in under Z ms
- Published to Homebrew and used by N people
- Includes comprehensive test coverage (>90%)

View on GitHub

## Project Gamma

Open source library · 2024

A brief one or two sentence description of what the project does and why it exists.
Focus on the problem it solves, not just the tech stack.

**Tech:** TypeScript, Vite

- Adopted by N projects on GitHub
- Zero runtime dependencies
- Full TypeScript types with JSDoc coverage

View on GitHub · npm

## Gamedev Project

PC game · Exam project 2026

A 2D wave-based game in a medieval setting, featuring normal enemies and a final boss.

**Tech:** Unity Hub, C#

- 2D wave-based combat game
- Expandable wave system

View on GitHub · App Store · Google Play

Last updated: April 2026

*10 April 2026*

---

# About

Marcus Rasmussen · Software Developer · Github · LinkedIn

Welcome to my playground. This is where I share my projects, blog posts, and more about my work as a student, developer, and designer.

Expect to find a mix of content here, from detailed project descriptions and case studies to more informal reflections and ideas. I use this space to document my learning journey, share insights from my work, and connect with others in the field.

*4 February 2026*