Understanding the Demo
This page offers a deep dive into the technologies and processes powering the Centering Theory Demo. It is aimed at developers and enthusiasts who want to understand the mechanics of Natural Language Processing (NLP) and discourse coherence analysis.
Frontend
The frontend is designed to provide a seamless and interactive user experience. Built using Next.js and React, it ensures fast rendering and a responsive UI. Below is an example of how a request is sent to the backend:
```javascript
const handleExtractAttributes = async () => {
  const response = await fetch("/center", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: userInput }),
  });
  const data = await response.json();
  setAnalysisResult(data.results);
};
```
This function sends user input to the backend for processing and updates the results state for rendering dynamic visualizations.
To bring the discourse relationships to life, the frontend leverages Framer Motion for smooth animations, MagicUI for stylish transitions, and Archer.js for rendering anchored relationships. For instance:
```jsx
<ArcherContainer>
  <ArcherElement id="root">
    <div>Root Element</div>
  </ArcherElement>
  <ArcherElement
    id="child"
    relations={[{ targetId: "root", sourceAnchor: "bottom", targetAnchor: "top" }]}
  >
    <div>Child Element</div>
  </ArcherElement>
</ArcherContainer>
```
This declarative structure makes the connections between elements explicit, keeping the rendered discourse relationships intuitive and visually engaging.
Backend
The backend processes user input, resolves coreferences, and extracts discourse attributes. It is built using Flask for routing and AllenNLP for handling coreference resolution.
```python
@app.route('/center', methods=['POST'])
def center():
    data = request.json
    if not data or 'text' not in data:
        return jsonify({'error': 'Invalid input'}), 400
    text = data['text']
    grouped_sentences, clusters, tokens = process_text(text)
    results = extract_centering(grouped_sentences, clusters, tokens)
    return jsonify({'sentences': grouped_sentences, 'results': results})
```
Example Request Body
```json
{
  "text": "John has been acting quite odd. He called up Mike yesterday. Mike was studying for his driver's test. He was annoyed by John's call."
}
```
Example Response
```json
{
  "results": {
    "relations": [
      { "sourceAnchor": "bottom", "sourceId": "word-0-0", "targetAnchor": "top", "targetId": "word-1-0" },
      { "sourceAnchor": "bottom", "sourceId": "word-1-3", "targetAnchor": "top", "targetId": "word-2-0" },
      { "sourceAnchor": "bottom", "sourceId": "word-1-3", "targetAnchor": "top", "targetId": "word-2-4" },
      { "sourceAnchor": "bottom", "sourceId": "word-2-0", "targetAnchor": "top", "targetId": "word-3-0" }
    ],
    "results": [
      { "Cb": null, "Cf": ["John"], "sentence": "John has been acting quite odd." },
      { "Cb": "John", "Cf": ["He", "Mike", "called"], "sentence": "He called up Mike yesterday." },
      { "Cb": "Mike", "Cf": ["Mike", "his"], "sentence": "Mike was studying for his driver's test." },
      { "Cb": "Mike", "Cf": ["John's", "He", "John's call"], "sentence": "He was annoyed by John's call." }
    ]
  },
  "sentences": [
    "John has been acting quite odd.",
    "He called up Mike yesterday.",
    "Mike was studying for his driver's test.",
    "He was annoyed by John's call."
  ]
}
```
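The Cb/Cf fields in the response follow standard Centering Theory bookkeeping: Cf lists the forward-looking centers (entity mentions) of each sentence, and Cb is the highest-ranked member of the previous sentence's Cf that is realized in the current one. The following is a simplified, illustrative sketch of that computation — not the demo's actual implementation — using hand-assigned coreference clusters in place of AllenNLP output:

```python
# Illustrative Centering Theory sketch: Cb is the highest-ranked mention
# from the PREVIOUS sentence's Cf whose coreference cluster is realized
# in the current sentence. Clusters here are hand-assigned for the demo
# discourse, not produced by a coreference model.

def extract_centering(sentences, cluster_of):
    """sentences: list of ranked mention lists; cluster_of: mention -> cluster id."""
    results = []
    prev_cf = []
    for mentions in sentences:
        # Cluster ids realized in the current sentence.
        current = {cluster_of[m] for m in mentions if m in cluster_of}
        cb = None
        for m in prev_cf:  # prev_cf is ranked, so the first hit wins
            if cluster_of.get(m) in current:
                cb = m
                break
        results.append({"Cb": cb, "Cf": list(mentions)})
        prev_cf = mentions
    return results

# Toy run mirroring the first three sentences of the example discourse.
clusters = {"John": 0, "He": 0, "Mike": 1, "his": 1}
sentences = [["John"], ["He", "Mike"], ["Mike", "his"]]
centers = extract_centering(sentences, clusters)
```

On this toy input the sketch yields Cb values of `None`, `"John"`, and `"Mike"`, matching the pattern in the response above.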
Challenges and Limitations
Developing this demo posed challenges in balancing performance, accuracy, and user experience:
- Context-Sensitive Terms: Words like "this" or "it" require sophisticated modeling to resolve their meanings accurately.
- Computational Demand: Coreference resolution and discourse analysis require significant resources, which may limit scalability.
- Data Validation: Ensuring user input adheres to constraints is crucial for reliable processing.
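As a concrete illustration of the data-validation point, a request body could be checked before processing along these lines (the field name matches the endpoint above, but the limits and error messages here are hypothetical, not the demo's actual constraints):

```python
# Hedged sketch of request validation for a /center-style endpoint.
# MAX_CHARS is an illustrative cap to bound coreference-resolution cost.
MAX_CHARS = 2000

def validate_request(data):
    """Return (ok, error_message) for a parsed JSON request body."""
    if not isinstance(data, dict) or "text" not in data:
        return False, "Request body must be a JSON object with a 'text' field"
    text = data["text"]
    if not isinstance(text, str) or not text.strip():
        return False, "'text' must be a non-empty string"
    if len(text) > MAX_CHARS:
        return False, f"'text' exceeds the {MAX_CHARS}-character limit"
    return True, None
```

Rejecting oversized or malformed input early keeps the expensive NLP pipeline from running on requests that cannot produce meaningful results.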
System Overview
The demo operates on a well-integrated stack:
- React and Next.js for the frontend.
- Flask for backend routing.
- AllenNLP for advanced NLP tasks.
- Hosting on OVH servers for reliability.
This project is open source and welcomes contributions from the community. You can explore the codebases for both the frontend and backend on GitHub: