Tuesday, 2 September 2025
Getting Started with Terraform: Infrastructure as Code Made Simple (Part 1)
Have you ever spent hours setting up servers, networks, or databases by clicking through endless dashboards, only to realize you have to repeat it all when something breaks?
This is where Infrastructure as Code (IaC) comes to the rescue, and Terraform is one of the best tools out there to make it happen.
In this blog, we’ll cover:
- What Terraform is
- Why companies love it
- How it works under the hood
- A simple “Hello World” example to get started
What is Infrastructure as Code (IaC)?
Think of IaC as writing recipes for your infrastructure.
Instead of manually creating resources in AWS, Azure, or GCP, you write a configuration file describing what you need: servers, storage, security rules, and everything else.
Just like software code, this file can be:
- Version controlled in GitHub
- Reviewed by teammates
- Reused across projects
With IaC, your infrastructure setup becomes:
- Repeatable – Spin up identical environments with one command.
- Automated – Reduce human errors from manual setups.
- Documented – Your code is the documentation.
Why Terraform?
There are other IaC tools like AWS CloudFormation, Azure Resource Manager, or Ansible.
So why is Terraform such a big deal?
1. Multi-Cloud Support
Terraform works with AWS, Azure, GCP, Kubernetes, GitHub, Datadog… even DNS providers.
One tool, many platforms.
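As a sketch of what "one tool, many platforms" looks like in practice, a single configuration can declare resources from several providers side by side. The resource names below are illustrative, not from any real setup:

```hcl
# Illustrative sketch: one configuration, two providers.
provider "aws" {
  region = "us-east-1"
}

provider "github" {
  # Credentials typically come from the GITHUB_TOKEN environment variable.
}

# An AWS bucket and a GitHub repository, managed by the same tool.
resource "aws_s3_bucket" "logs" {
  bucket = "example-logs-bucket" # hypothetical name
}

resource "github_repository" "infra" {
  name        = "infra-configs" # hypothetical name
  description = "Infrastructure configuration"
}
```

One `terraform apply` reconciles both platforms in a single run.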
2. Declarative Syntax
You tell Terraform what you want, not how to do it.
For example:
"I want 1 S3 bucket."
Terraform figures out all the API calls for you.
3. State Management
Terraform keeps track of what exists in your cloud so it knows exactly what to change next time.
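For instance, if you later edit a tag in the configuration, `terraform plan` compares your code against the recorded state and reports only the difference. The output below is abridged and illustrative, not from a real run:

```
  # aws_s3_bucket.my_bucket will be updated in-place
  ~ resource "aws_s3_bucket" "my_bucket" {
      ~ tags = {
          ~ "env" = "dev" -> "staging"
        }
        # (unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
```

Because state tracks what already exists, Terraform changes only what drifted, rather than recreating everything.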
How Terraform Works (The Big Picture)
Terraform has a simple workflow:
Write → Plan → Apply
- Write: You write a configuration file in HCL (HashiCorp Configuration Language).
- Plan: Terraform shows what changes it will make (add, modify, delete).
- Apply: Terraform executes the plan and talks to the cloud provider APIs.
Case Study: A Startup Saves Time with Terraform
Imagine a small startup launching a new app.
- They need servers, databases, and storage on AWS.
- Their developer sets everything up manually using the AWS Console.
- A month later, they want the same setup for testing.
Instead of repeating all those clicks, they switch to Terraform:
- Create one Terraform script for the whole infrastructure.
- Reuse it for dev, staging, and production.
- Launch new environments in minutes, not hours.
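A minimal sketch of that reuse (the variable and resource names here are hypothetical, not from the startup's actual setup): the environment name becomes a variable, and each environment is just a different value for it.

```hcl
# variables.tf -- hypothetical example
variable "environment" {
  type    = string
  default = "dev"
}

# main.tf -- resources derive their names from the variable
resource "aws_s3_bucket" "app_assets" {
  bucket = "myapp-assets-${var.environment}"

  tags = {
    Environment = var.environment
  }
}
```

Running `terraform apply -var="environment=staging"` then builds the staging copy from the exact same code.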
That’s real-world productivity.
Installing Terraform
Step 1: Download
Go to terraform.io and download for Windows, macOS, or Linux.
Step 2: Verify
Open a terminal and type:
terraform -version
You should see something like:
Terraform v1.8.0
Your First Terraform Project: Hello World
Let’s create a simple AWS S3 bucket using Terraform.
main.tf
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "my_bucket" {
  # Bucket names are globally unique -- pick your own suffix.
  bucket = "my-terraform-hello-world-bucket"

  # Note: the inline `acl` argument is deprecated in AWS provider v4+;
  # new buckets are private by default.
}
Commands to Run
terraform init # Initialize Terraform project
terraform plan # See what will be created
terraform apply # Actually create the bucket
In a few seconds, you have a working S3 bucket.
No clicking through the AWS Console.
Case Study: Enterprise-Level Use
At companies like Uber and Airbnb, Terraform manages thousands of servers.
- Developers write Terraform scripts.
- Changes go through GitHub pull requests.
- Once approved, Terraform automatically updates infrastructure.
Result?
Consistency across teams, fewer mistakes, and faster deployments.
Key Takeaways
- Terraform = Infrastructure automation made simple.
- It’s cloud-agnostic, declarative, and scalable.
- Perfect for both startups and enterprises.
What’s Next?
In the next blog, we’ll go hands-on:
- Create multiple resources
- Understand state files
- See how Terraform knows what to create, update, or delete
Bibliography
- HashiCorp. Terraform Documentation. Available at: https://www.terraform.io/docs
- Amazon Web Services. AWS S3 Documentation. Available at: https://docs.aws.amazon.com/s3
- Microsoft. Azure Resource Manager Documentation. Available at: https://learn.microsoft.com/en-us/azure/azure-resource-manager
- HashiCorp. Terraform Language Documentation. Available at: https://developer.hashicorp.com/terraform/language
- Google Cloud Platform. Google Cloud Documentation. Available at: https://cloud.google.com/docs
Tuesday, 26 August 2025
FreeRTOS on ESP32: Beginner's Guide with Features, Benefits & Practical Examples
Introduction
When developing embedded systems, managing tasks, timing, and resources efficiently becomes a challenge as the complexity of the application grows. This is where Real-Time Operating Systems (RTOS) come in.
FreeRTOS is one of the most popular open-source real-time operating systems for microcontrollers. It is small, fast, and easy to integrate into resource-constrained devices like the ESP32, making it ideal for IoT, automation, and robotics projects.
In this blog, we will cover:
- What FreeRTOS is
- Key features of FreeRTOS
- Why FreeRTOS is a good choice for ESP32 projects
- A hands-on example using ESP32
What is FreeRTOS?
FreeRTOS is a lightweight, real-time operating system kernel for embedded devices. It provides multitasking capabilities, letting you split your application into independent tasks (threads) that run seemingly in parallel.
For example, on ESP32, you can have:
- One task reading sensors
- Another handling Wi-Fi communication
- A third controlling LEDs
All running at the same time without interfering with each other.
Key Features of FreeRTOS
1. Multitasking with Priorities
FreeRTOS allows multiple tasks to run with different priorities. The scheduler ensures high-priority tasks get CPU time first, making it suitable for real-time applications.
2. Lightweight and Portable
The kernel is very small (a few KBs), making it ideal for microcontrollers like ESP32 with limited resources.
3. Preemptive and Cooperative Scheduling
- Preemptive: Higher priority tasks can interrupt lower ones.
- Cooperative: Tasks voluntarily give up CPU control.
This provides flexibility depending on your project needs.
4. Task Synchronization
Features like semaphores, mutexes, and queues help coordinate tasks and prevent resource conflicts.
5. Software Timers
Timers allow tasks to be triggered at regular intervals without blocking the main code.
6. Memory Management
Multiple memory allocation schemes let you optimize for speed or minimal memory fragmentation.
7. Extensive Hardware Support
FreeRTOS runs on 40+ architectures, including ARM Cortex-M, AVR, RISC-V, and of course, ESP32 (via the ESP-IDF framework).
Why Use FreeRTOS on ESP32?
The ESP32 has:
- Dual-core processor
- Wi-Fi + Bluetooth
- Plenty of GPIOs
With FreeRTOS, you can use these resources efficiently:
- Run Wi-Fi tasks on Core 0
- Handle sensor data on Core 1
- Keep the system responsive and organized
Example: Blinking LED Using FreeRTOS on ESP32
Below is a simple FreeRTOS example using the Arduino IDE with the ESP32 (the same FreeRTOS APIs are available under ESP-IDF).
Code Example
#include <Arduino.h>
// Task Handles
TaskHandle_t Task1;
TaskHandle_t Task2;
// Task 1: Blink LED every 1 second
void TaskBlink1(void *pvParameters) {
  pinMode(2, OUTPUT); // Onboard LED
  while (1) {
    digitalWrite(2, HIGH);
    vTaskDelay(1000 / portTICK_PERIOD_MS); // 1 second delay
    digitalWrite(2, LOW);
    vTaskDelay(1000 / portTICK_PERIOD_MS);
  }
}

// Task 2: Print message every 2 seconds
void TaskPrint(void *pvParameters) {
  while (1) {
    Serial.println("Task 2 is running!");
    vTaskDelay(2000 / portTICK_PERIOD_MS);
  }
}

void setup() {
  Serial.begin(115200);
  // Create two FreeRTOS tasks
  xTaskCreate(TaskBlink1, "Blink Task", 1000, NULL, 1, &Task1);
  xTaskCreate(TaskPrint, "Print Task", 1000, NULL, 1, &Task2);
}

void loop() {
  // Nothing here - tasks handle everything
}
How the Code Works
- xTaskCreate: Creates a FreeRTOS task. Each task runs independently.
- vTaskDelay: Delays a task without blocking others.
- Two tasks:
- Task 1 blinks the LED every second.
- Task 2 prints a message every two seconds.
Both tasks run in parallel on the ESP32.
Diagrammatically, the task architecture (diagram not reproduced here):
- Groups tasks clearly by Core 0 (Network/IO) and Core 1 (Control/Timing).
- Places shared Queue/Event Group in the center.
- Shows ISR → Queue → Tasks data flow with minimal arrows for clarity.
1) Pin Tasks to Cores + Precise Periodic Scheduling
Use xTaskCreatePinnedToCore to control where tasks run and vTaskDelayUntil for jitter-free loops.
#include <Arduino.h>
TaskHandle_t sensorTaskHandle, wifiTaskHandle;
void sensorTask(void *pv) {
  const TickType_t period = pdMS_TO_TICKS(10); // 100 Hz
  TickType_t last = xTaskGetTickCount();
  for (;;) {
    // read sensor here
    // ...
    vTaskDelayUntil(&last, period);
  }
}

void wifiTask(void *pv) {
  for (;;) {
    // handle WiFi / MQTT here
    vTaskDelay(pdMS_TO_TICKS(50));
  }
}

void setup() {
  Serial.begin(115200);
  // Run time-critical sensor task on Core 1, comms on Core 0
  xTaskCreatePinnedToCore(sensorTask, "sensor", 2048, NULL, 3, &sensorTaskHandle, 1);
  xTaskCreatePinnedToCore(wifiTask, "wifi", 4096, NULL, 2, &wifiTaskHandle, 0);
}

void loop() {}
Why it’s useful: keep deterministic work (sensors/control) isolated from network stacks.
2) Queues: From ISR to Task (Button → LED)
Move edge events out of the ISR using queues and process them safely in a task.
#include <Arduino.h>
static QueueHandle_t buttonQueue;
const int BTN_PIN = 0; // adjust for your board
const int LED_PIN = 2;
void IRAM_ATTR onButtonISR() {
  uint32_t tick = millis();
  BaseType_t hpTaskWoken = pdFALSE;
  xQueueSendFromISR(buttonQueue, &tick, &hpTaskWoken);
  if (hpTaskWoken) portYIELD_FROM_ISR();
}

void ledTask(void *pv) {
  pinMode(LED_PIN, OUTPUT);
  uint32_t eventTime;
  for (;;) {
    if (xQueueReceive(buttonQueue, &eventTime, portMAX_DELAY) == pdPASS) {
      // simple action: blink LED on each press
      digitalWrite(LED_PIN, !digitalRead(LED_PIN));
      Serial.printf("Button @ %lu ms\n", eventTime);
    }
  }
}

void setup() {
  Serial.begin(115200);
  pinMode(BTN_PIN, INPUT_PULLUP);
  buttonQueue = xQueueCreate(8, sizeof(uint32_t));
  attachInterrupt(digitalPinToInterrupt(BTN_PIN), onButtonISR, FALLING);
  xTaskCreate(ledTask, "ledTask", 2048, NULL, 2, NULL);
}

void loop() {}
Tip: keep ISRs tiny; send data to tasks via queues.
3) Mutex: Protect Shared Resources (Serial / I²C / SPI)
Avoid interleaved prints or bus collisions with a mutex.
#include <Arduino.h>
SemaphoreHandle_t ioMutex;
void chatterTask(void *pv) {
  const char *name = (const char*)pv;
  for (;;) {
    if (xSemaphoreTake(ioMutex, pdMS_TO_TICKS(50)) == pdTRUE) {
      Serial.printf("[%s] hello\n", name);
      xSemaphoreGive(ioMutex);
    }
    vTaskDelay(pdMS_TO_TICKS(200));
  }
}

void setup() {
  Serial.begin(115200);
  ioMutex = xSemaphoreCreateMutex();
  xTaskCreate(chatterTask, "chat1", 2048, (void*)"T1", 1, NULL);
  xTaskCreate(chatterTask, "chat2", 2048, (void*)"T2", 1, NULL);
}

void loop() {}
Why it’s useful: prevents priority inversion and corrupted I/O.
4) Binary Semaphore: Signal Readiness (Wi-Fi Connected → Start Task)
Use a binary semaphore to gate a task until some condition is met.
#include <Arduino.h>
SemaphoreHandle_t wifiReady;
void workerTask(void *pv) {
  // wait until Wi-Fi is ready
  xSemaphoreTake(wifiReady, portMAX_DELAY);
  Serial.println("WiFi ready, starting cloud sync…");
  for (;;) {
    // do cloud work
    vTaskDelay(pdMS_TO_TICKS(1000));
  }
}

void setup() {
  Serial.begin(115200);
  wifiReady = xSemaphoreCreateBinary();
  // simulate Wi-Fi connect on another task/timer
  xTaskCreate([](void*){
    vTaskDelay(pdMS_TO_TICKS(2000)); // pretend connect delay
    xSemaphoreGive(wifiReady);
    vTaskDelete(NULL);
  }, "wifiSim", 2048, NULL, 2, NULL);
  xTaskCreate(workerTask, "worker", 4096, NULL, 2, NULL);
}

void loop() {}
5) Event Groups: Wait for Multiple Conditions
Synchronize on multiple bits (e.g., Wi-Fi + Sensor) before proceeding.
#include <Arduino.h>
#include "freertos/event_groups.h"
EventGroupHandle_t appEvents;
const int WIFI_READY_BIT   = BIT0;
const int SENSOR_READY_BIT = BIT1;

void setup() {
  Serial.begin(115200);
  appEvents = xEventGroupCreate();

  // Simulate async readiness
  xTaskCreate([](void*){
    vTaskDelay(pdMS_TO_TICKS(1500));
    xEventGroupSetBits(appEvents, WIFI_READY_BIT);
    vTaskDelete(NULL);
  }, "wifi", 2048, NULL, 2, NULL);

  xTaskCreate([](void*){
    vTaskDelay(pdMS_TO_TICKS(800));
    xEventGroupSetBits(appEvents, SENSOR_READY_BIT);
    vTaskDelete(NULL);
  }, "sensor", 2048, NULL, 2, NULL);

  // Wait for both bits
  xTaskCreate([](void*){
    EventBits_t bits = xEventGroupWaitBits(
      appEvents, WIFI_READY_BIT | SENSOR_READY_BIT,
      pdFALSE,       /* don't clear */
      pdTRUE,        /* wait for all */
      portMAX_DELAY
    );
    Serial.printf("Ready! bits=0x%02x\n", bits);
    vTaskDelete(NULL);
  }, "gate", 2048, NULL, 3, NULL);
}

void loop() {}
6) Software Timers: Non-Blocking Periodic Work
Use xTimerCreate for periodic or one-shot jobs without dedicating a full task.
#include <Arduino.h>
TimerHandle_t blinkTimer;
const int LED = 2;
void blinkCb(TimerHandle_t) {
  digitalWrite(LED, !digitalRead(LED));
}

void setup() {
  pinMode(LED, OUTPUT);
  blinkTimer = xTimerCreate("blink", pdMS_TO_TICKS(250), pdTRUE, NULL, blinkCb);
  xTimerStart(blinkTimer, 0);
}

void loop() {}
Why it’s useful: frees CPU and stack compared to a dedicated blink task.
7) Task Notifications: Fast 1-to-1 Signal (Lighter than Queues)
Direct-to-task notifications are like super-light binary semaphores.
#include <Arduino.h>
TaskHandle_t workTaskHandle;
void IRAM_ATTR quickISR() {
  BaseType_t xHigher = pdFALSE;
  vTaskNotifyGiveFromISR(workTaskHandle, &xHigher);
  if (xHigher) portYIELD_FROM_ISR();
}

void workTask(void *pv) {
  for (;;) {
    ulTaskNotifyTake(pdTRUE, portMAX_DELAY); // waits, clears on take
    // handle event fast
    Serial.println("Notified!");
  }
}

void setup() {
  Serial.begin(115200);
  xTaskCreate(workTask, "work", 2048, NULL, 3, &workTaskHandle);
  // simulate an interrupt source using a timer
  hw_timer_t *timer = timerBegin(0, 80, true); // 80 MHz / 80 = 1 us tick
  timerAttachInterrupt(timer, &quickISR, true);
  timerAlarmWrite(timer, 500000, true); // 500 ms
  timerAlarmEnable(timer);
}

void loop() {}
8) Producer–Consumer with Queue + Backpressure
Avoid overruns by letting the queue throttle the producer.
#include <Arduino.h>
QueueHandle_t dataQ;
void producer(void *pv) {
  uint16_t sample = 0;
  for (;;) {
    sample++;
    if (xQueueSend(dataQ, &sample, pdMS_TO_TICKS(10)) != pdPASS) {
      // queue full -> dropped (or handle differently)
    }
    vTaskDelay(pdMS_TO_TICKS(5)); // 200 Hz
  }
}

void consumer(void *pv) {
  uint16_t s;
  for (;;) {
    if (xQueueReceive(dataQ, &s, portMAX_DELAY) == pdPASS) {
      // heavy processing
      vTaskDelay(pdMS_TO_TICKS(20)); // slower than producer
      Serial.printf("Processed %u\n", s);
    }
  }
}

void setup() {
  Serial.begin(115200);
  dataQ = xQueueCreate(16, sizeof(uint16_t));
  xTaskCreatePinnedToCore(producer, "prod", 2048, NULL, 2, NULL, 1);
  xTaskCreatePinnedToCore(consumer, "cons", 4096, NULL, 2, NULL, 0);
}

void loop() {}
9) Watchdog-Friendly Yields in Busy Tasks
Long loops should yield to avoid soft WDT resets and keep the system responsive.
#include <Arduino.h>
void heavyTask(void *pv) {
  for (;;) {
    // do chunks of work…
    // ...
    vTaskDelay(1); // yield to scheduler (~1 tick)
  }
}

void setup() {
  xTaskCreate(heavyTask, "heavy", 4096, NULL, 1, NULL);
}

void loop() {}
10) Minimal ESP-IDF Style (for reference)
If you’re on ESP-IDF directly:
// C (ESP-IDF)
// taskA and taskB are ordinary FreeRTOS task functions,
// e.g. void taskA(void *pvParameters) { for (;;) { ... } }
void app_main(void) {
  xTaskCreatePinnedToCore(taskA, "taskA", 2048, NULL, 3, NULL, 1);
  xTaskCreatePinnedToCore(taskB, "taskB", 4096, NULL, 2, NULL, 0);
}
APIs are the same FreeRTOS ones; you’ll use ESP-IDF drivers (I2C, ADC, Wi-Fi) instead of Arduino wrappers.
Practical Stack/Perf Tips
- Start with 2 to 4 KB of stack per task; raise it if you see resets. Use uxTaskGetStackHighWaterMark(NULL) to check headroom.
- Prefer task notifications over queues for single-bit triggers; they’re faster and lighter.
- Keep ISRs tiny; do work in tasks.
- Use vTaskDelayUntil for fixed-rate loops (control systems).
- Signal grouped readiness with Event Groups; single readiness with binary semaphores.
Real-World Use Cases on ESP32
- Home Automation: Sensor monitoring + Wi-Fi communication + relay control.
- Industrial IoT: Data acquisition + edge processing + cloud integration.
- Wearables: Health data collection + Bluetooth communication.
FreeRTOS turns your ESP32 into a powerful multitasking device capable of handling complex, real-time applications. Its lightweight nature, multitasking support, and rich feature set make it perfect for IoT, robotics, and industrial projects.
By starting with simple tasks like LED blinking, you can gradually build more complex systems involving sensors, communication, and user interfaces; all running smoothly on FreeRTOS.
Bibliography
- FreeRTOS Official Documentation – Reference for API usage and concepts.
- Espressif ESP-IDF Programming Guide – For ESP32-specific FreeRTOS features.
- Arduino-ESP32 Core – Arduino core for ESP32 with FreeRTOS integration.
- Richard Barry, Mastering the FreeRTOS Real-Time Kernel, Real Time Engineers Ltd.
Sunday, 24 August 2025
🤝Rebuilding Real-World Connection: Beyond Virtual Spaces & AI Bots
We live in the most connected era in human history, yet so many of us feel profoundly disconnected. Our phones ping all day, video calls bring distant faces close, and social platforms keep us updated on everyone’s lives. Still, when the screen goes dark, loneliness often lingers.
It is the paradox of modern living: we are digitally close but emotionally far.
Over the past decade, virtual spaces have given us incredible tools: ways to meet new people, work remotely, and even find communities we never knew existed. But as powerful as these platforms are, they cannot fully replace the richness of a real conversation, a shared meal, or laughter that echoes in the same room. That is why more people are now seeking ways to rebuild real-world connections, not by rejecting technology but by going beyond it.
Why Digital Connections Fall Short
Virtual interactions often strip away the subtle layers of human connection.
- A text message cannot convey the warmth of tone in someone’s voice.
- A “like” on a photo does not equal genuine support.
- Even video calls, while better, cannot replicate the comfort of presence.
Over time, this creates relationships that feel surface-level. They are easy to maintain, but they often lack depth. And as research shows, shallow connections can leave us feeling lonelier than no connection at all.
The Rise of Hybrid Belonging
The solution is not abandoning virtual spaces but blending them with real-world experiences. Some of the most promising movements today focus on hybrid connection models:
- Virtual Living Rooms: Online groups that coordinate local meetups, where people move from chatrooms to cafés or parks.
- Community Hubs: Startups and nonprofits are creating “third places”—spaces beyond home and work where people gather for conversation, hobbies, or learning.
- Digital to Physical Rituals: Book clubs that meet online weekly but hold quarterly in-person events, or gaming communities that plan offline tournaments.
Here, technology acts as a bridge, not a replacement. It helps us find people, then nudges us back into the real world.
The Hidden Risks of Virtual Bonds
While digital platforms promise connection, they also open the door to manipulation and misleading relationships. Social networks are filled with digital creators, influencers, and sometimes fake users who shape how we feel and act.
- Creators and Influencers: Many people form emotional attachments to digital creators they follow, but the relationship is often one-sided. Updates, posts, and status stories may give the illusion of intimacy, while in reality the creator may not even see individual followers as people, but as part of an audience.
- Fake Profiles: Bots and fake users exploit trust, manipulating emotions, money, or attention. These interactions can feel real in the moment but leave people drained and questioning themselves later.
- Misleading Signals: A like, a viewed status, or an occasional update can feel like subtle communication, but in reality, it might be nothing more than an algorithm-driven interaction.
A Real Case Study: Experiment for making a meaningful connection on Instagram
In one of our personal experiments, we decided to test how genuine digital bonds really are. We spent roughly six months to a year trying to form a meaningful connection with a digital creator through direct messages.
At first, it felt promising: we sent thoughtful messages, hoping to build a bond. But instead of replies, what we received were mixed signals: updates on her status, occasional posts that seemed like indirect responses, and silence in the inbox.
This left us with questions:
- Was the account even being managed by the creator herself?
- Or was it controlled by a team, or perhaps even automated tools?
- Were we trying to connect with a person, or just chasing the reflection of an online persona?
- If the account is maintained by a third-party content manager, shouldn’t that be clearly mentioned in the Instagram bio?
- If the account belonged to a female creator, why did the likes from it often feel as though they were being made by a man?
- If someone truly wants to connect with a profile, they should reply directly to DMs; and if the creator is irritated by unwanted messages, they always have the option to block that account.
The more we thought about it, the more we realized how fragile these digital bonds can be. Every status update felt like it was speaking to our experimental Instagram profile, yet it could just as easily have been a generic post meant for thousands of followers. Every silence made us wonder if it was personal, or simply indifference lost in the noise of endless notifications.
At times, we even questioned whether we were interacting with a real individual at all. Could it have been a content manager scheduling posts, or even bot-driven engagement designed to keep the account active and “alive”? The line between authenticity and performance felt thinner with each interaction.
That’s when it struck us: maybe what we were chasing wasn’t the creator herself, but the carefully curated illusion of connection that social networks thrive on.
The experience revealed something important: virtual signals are not always real connections. They can manipulate emotions, encourage us to read meaning where there may be none, and ultimately leave us feeling uncertain about reality.
The Role of AI Companionship (and Its Limits)
We cannot ignore the rise of AI companion apps designed to chat, listen, and even mimic friendship. For some, these tools fill a gap. But while an AI can simulate empathy, it cannot be human. It cannot share your silence in a park, give you a knowing look, or surprise you with its imperfections.
Real connection is messy, unpredictable, and wonderfully human. AI may supplement, but it can never substitute.
Why This Matters
These experiments highlight the fragility of online bonds. When trust is absent, and when interactions are filtered through algorithms or managers, relationships can quickly shift from hopeful to hollow. This is not just about one failed connection, it’s about a larger truth: our digital world is full of blurred lines between authenticity and performance.
That’s why rebuilding real-world connection is so critical. Offline, we don’t have to wonder if someone is “really” behind the screen. A smile, a handshake, or even a pause in conversation carries authenticity that a status update never can.
Practical Ways to Rebuild Real-World Bonds
So, how do we start moving beyond screens? Here are a few simple shifts:
- Prioritize Face-to-Face Moments: Schedule coffee with a friend instead of just texting “how are you?”
- Phone or WhatsApp Calls: A real-time conversation carries far more warmth than a silent like or a delayed reply. Hearing someone’s voice, even through a short call, can bridge distance and add depth to relationships.
- Rediscover Local Spaces: Libraries, community centers, sports clubs, even neighborhood walks—small places can spark real interactions.
- Host Without Perfection: Invite people over, even if your home is not spotless. Connection thrives in authenticity, not performance.
- Digital Boundaries: Set intentional limits, like “no phones at dinner,” to reclaim presence in shared spaces.
Choosing Depth Over Noise
Virtual platforms are not the enemy. They have given us incredible opportunities to connect. But if we want to feel truly alive, we have to step back into real spaces where hugs replace emojis, eye contact says more than words, and friendships are built not on algorithms but on time and trust.
The future of connection is not about abandoning technology. It is about using it wisely not as the final destination, but as a doorway that leads us back to what we have always needed most: each other.
Thursday, 21 August 2025
How AI Will Transform the IB Design Cycle From MYP to DP for K-12 Students
Introduction – The Human & AI Creative Duo
Picture an IB classroom where students from core subjects to creative design are sketching, ideating, and prototyping. Now, imagine AI beside them: offering thoughtful suggestions, sparking new ideas, and guiding reflection, but never replacing their creativity. This is the future of IB design education across both MYP and DP: AI as the silent collaborator, amplifying human ingenuity.
Let’s explore how AI can elevate each stage of both design cycles, guided by human-centered examples and real-world contexts.
MYP Design Cycle: A Structured Launchpad for Creativity
In the MYP, students follow a four-step cycle:
Inquiring & Analyzing → Developing Ideas → Creating the Solution → Evaluating (CASIE).
1. Inquiring & Analyzing
How AI helps:
- Boosts research depth, offering smart summaries, relevant examples, and potential directions.
- Fosters AI literacy, prompting questions like: what does AI include and what does it miss?
Example:
At a primary school in England, students’ descriptions are transformed into AI-generated images—sparking rich inquiry and letting language fuel creative exploration. (Prince George's County Public Schools)
2. Developing Ideas
How AI helps:
- Acts as a creative co-pilot, remixing ideas, suggesting “what-if?” pathways.
- Mirrors professional generative design, exploring multiple design variations. (Wikipedia, Prince George's County Public Schools)
Case Study:
AiDLab in Hong Kong empowers fashion students with AI tools, democratising design and helping small creators innovate faster. (CASIE)
3. Creating the Solution
How AI helps:
- Supports prototyping with smart suggestions, progress monitoring, and design scaffolds.
- Treats AI as a co-creator, blending its strengths with human intention. (Wikipedia)
Case Study:
At Universiti Malaysia Kelantan, AI-enhanced creative technology courses helped students work across media, integrating digital arts and design seamlessly. (International Baccalaureate®)
4. Evaluating
How AI helps:
- Enables simulations of user interaction or functionality, giving students more data to reflect on.
- Offers reflective prompts: “What worked?”, “What could be improved?”
Example:
In New York, AI was used behind the scenes to build responsive lessons for 6th graders helping teachers save time and foster student reflection. (Wikipedia)
DP Design Cycle: Higher Expectations, Deeper Inquiry
In the DP Design Technology, students engage in a similar yet more advanced cycle: Analysis → Design Development → Synthesis → Evaluation (International Baccalaureate®).
It emphasizes sophisticated design thinking, critical inquiry, and real-world impact through projects like the internally assessed design task that accounts for 40% of the grade (International Baccalaureate®).
1. Analysis / Inquiring & Analyzing
How AI helps:
- Offers data insights to sharpen problem definition—user needs, constraints, and design briefs.
- Encourages ethical inquiry: “Who benefits?”, “What are unintended consequences?”
2. Design Development / Developing Ideas
How AI helps:
- Enables rapid concept iteration with constraints like ergonomics, sustainability, or materials.
- Simulates user-centered design scenarios to develop human-centered solutions.
3. Synthesis / Creating the Solution
How AI helps:
- Assists in drafting prototypes (digital or conceptual) with feedback loops.
- Supports reflection on sustainability and commercial viability—major DP themes. (Wikipedia)
4. Evaluation
How AI helps:
- Simulates market or user reactions.
- Helps students critique their own designs via AI-generated rubrics, aligning with DP assessment criteria. (International Baccalaureate®)
Summary Table: AI Across IB Design Cycles
| IB Programme | Design Stage | Role of AI | Real-world Inspiration |
|---|---|---|---|
| MYP | Inquire & Analyze | Research augmentation, AI literacy | AI-generated visuals from writing (UK) |
| | Develop Ideas | Creative partner, generative design prompts | AiDLab fashion ideation (Hong Kong) |
| | Create Solution | Smart prototyping guidance | AI-enabled course creation (Malaysia) |
| | Evaluate | Simulations, reflective prompting | AI-driven lesson feedback (NY schools) |
| DP | Analysis | Insightful problem framing, ethical inquiry | AI supports briefing phases |
| | Design Development | Concept iteration with constraints | Handles ergonomics, sustainability |
| | Synthesis | Prototype assistance, viability simulations | Focuses on sustainability/commercial logic |
| | Evaluate | Testing, AI critique, rubric alignment | Meets DP criteria via AI support |
Human-Centered, AI-Enhanced Learning
In both MYP and DP design, AI isn’t a shortcut—it’s a catalyst. It:
- Enriches inquiry (asking better questions).
- Amplifies creative exploration (more possibilities).
- Accelerates prototyping and iteration.
- Deepens reflective evaluation.
With strong ethical frameworks, access equity, and thoughtful integration, AI can become a trusted co-designer, not an all-powerful replacement.
Let’s map specific AI tools directly to the PYP, MYP, and DP design cycles, with real-world alignment: a practical guide for K-12 integration. The breakdown below goes program by program, showing how AI tools support each stage with examples, benefits, and usage cases.
AI Tools Across IB Design Cycles: Practical Integration Guide
1. PYP (Primary Years Programme): Early Inquiry & Exploration
At this stage, students are developing foundational curiosity, creativity, and reflection. AI tools should be simple, visual, and playful.
| PYP Design Stage | AI Tool Example | How It Helps | Real Classroom Use Case |
|---|---|---|---|
| Inquire & Analyze | ChatGPT Edu, Curipod | Turns student questions into child-friendly explanations. | 2nd graders ask “Why do plants need sun?” → AI gives stories & images. |
| Develop Ideas | DALL·E, Canva Magic Design | Creates visuals from student sketches or descriptions. | Students imagine “a robot gardener,” see multiple AI visuals. |
| Create the Solution | Scratch + AI extensions | Code simple interactive stories with AI character generation. | PYP tech club codes storytelling robots with AI voiceovers. |
| Evaluate | Mentimeter, Kahoot AI | Quick AI quizzes for peer feedback. | Students vote on best robot designs, AI summarizes insights. |
Example:
A 4th-grade class in Singapore used Curipod to turn their water conservation ideas into storyboards with AI illustrations. Kids voted on the most impactful design before prototyping a simple model.
2. MYP (Middle Years Programme): Structured Design Thinking
MYP students handle bigger challenges, so AI tools should support research depth, idea generation, and real-time prototyping.
| MYP Design Stage | AI Tool Example | How It Helps | Real Classroom Use Case |
|---|---|---|---|
| Inquire & Analyze | Perplexity AI, ChatGPT Edu | Summarizes sources, suggests analysis angles, cites references. | Students exploring plastic waste design eco-friendly packaging. |
| Develop Ideas | RunwayML, MidJourney | Generates concept visuals & animations for brainstorming. | AI suggests 3D packaging prototypes before finalizing. |
| Create the Solution | TinkerCAD + AI plug-ins | AI recommends material choices or design tweaks. | Students 3D print AI-refined prototypes for eco-designs. |
| Evaluate | ChatGPT Custom GPTs, Gradescope AI | Simulates user feedback & generates reflective questions. | Students analyze why their designs failed water tests. |
Case Study:
At a Hong Kong IB school, students designed AI-powered recycling bins. AI suggested multiple prototypes; students tested sensors with real users, then refined designs based on AI-simulated user interactions.
3. DP (Diploma Programme): Complex, Real-World Problem Solving
DP Design Tech projects demand rigor, ethical reasoning, and professional-level prototyping. AI here becomes a research partner, co-designer, and evaluator.
| DP Design Stage | AI Tool Example | How It Helps | Real Classroom Use Case |
|---|---|---|---|
| Analysis | ChatGPT Edu + ScholarAI | Summarizes academic research, generates ethical debate points. | Students researching biomimicry-inspired architecture. |
| Design Development | Fusion 360 with AI extensions | Suggests multiple structural or ergonomic design variations. | AI optimizes weight-bearing prototypes for a bridge. |
| Synthesis | RunwayML, Adobe Firefly | Creates marketing visuals, AR/VR simulations for product demos. | Students create AI-driven virtual reality prototypes. |
| Evaluation | Gradescope AI, ChatGPT Rubric Generator | Aligns student work with IB DP criteria, offers improvement tips. | AI suggests rubric-aligned feedback on design reports. |
Case Study:
A DP team in Canada designed a solar-powered smart bench. AI optimized panel angles, simulated energy output in various weather conditions, and suggested cost-efficient materials, reducing iteration time by 40%.
Cross-Programme Benefits of AI Integration
- Saves time on research & prototyping → More focus on creativity & ethics.
- Democratizes access → Smaller schools access design expertise through AI tools.
- Encourages reflection → AI prompts “why” questions, not just “how” solutions.
- Fosters interdisciplinary skills → Merges science, technology, ethics, and arts.
Bibliography
- International Baccalaureate Organization. (2023). Design and Technology: Diploma Programme curriculum framework. Retrieved from https://www.ibo.org
- International Baccalaureate Organization. (2023). MYP Design Cycle framework. Retrieved from https://www.ibo.org/programmes/middle-years-programme/curriculum/design/
- Massachusetts Department of Education. (2023). Artificial Intelligence in K-12 Education: Opportunities and Guidance. Retrieved from https://www.doe.mass.edu
- AiDLab Hong Kong. (2023). AI in Fashion Innovation and Design. Retrieved from https://aidlab.hk
- Digital Promise. (2024). Transforming K-12 Education with AI: Insights from 28 Exploratory Projects. Retrieved from https://digitalpromise.org
- The Guardian. (2025). AI in Primary Education: Creativity and Literacy. Retrieved from https://theguardian.com
- SpringerOpen. (2023). AI Literacy in K-12 Education: Pedagogical Challenges and Opportunities. STEM Education Journal. Retrieved from https://stemeducationjournal.springeropen.com
- ResearchGate. (2024). AI Integration in Creative Technology Courses: A Case Study in Malaysia. Retrieved from https://researchgate.net
Wednesday, 20 August 2025
The Future of Design Thinking in the Age of AI
Design Thinking has long been one of the most powerful human-centered methodologies for innovation. It’s a cyclical process of empathizing with users, defining their problems, ideating solutions, prototyping, and testing. What makes it unique is its focus on people first; technology and business follow after.
But in the age of generative AI, this process is being fundamentally reimagined. AI is not here to replace designers or innovators; it’s a new creative collaborator that amplifies what humans already do best: empathy, problem-solving, and imagination.
Prototyping: From Manual Work to Instant Iteration
The prototyping phase, the “make it real” step, is where AI is making some of its most visible impact. Traditionally, creating a high-fidelity prototype could take days or even weeks of wireframing, pixel pushing, and manual refinement. Today, with the right prompts, a designer can generate dozens of variations in minutes.
Case Study: Automating UI/UX Design
Tools like Uizard and Relume AI allow designers to upload a rough sketch or write a simple text prompt like:
“Design a mobile app interface for a fitness tracker with a clean, minimalist aesthetic.”
In seconds, the AI generates fully fleshed-out interfaces complete with layouts, color schemes, and even sample content. Designers can then test multiple versions with users, collect feedback quickly, and refine the best direction.
The result? The design-to-testing loop shortens dramatically. Designers spend less time perfecting the how and more time focusing on the why: understanding the user and creating meaningful experiences.
Ideation: Beyond the Human Brainstorm
Ideation, the brainstorming phase, has always thrived on volume. The more ideas you generate, the greater the chances of finding a breakthrough. But human teams often plateau after a few dozen concepts. Generative AI, however, can serve as an idea engine that never runs out of fuel.
Example: A “How Might We…” Framework on Steroids
Take the challenge: “How might we make grocery shopping more sustainable?”
A traditional brainstorm might yield a dozen ideas, some practical and others far-fetched. With AI, a team can feed in user insights, market research, and competitive data. In return, the AI produces hundreds of potential solutions ranging from AI-driven meal planners that reduce food waste to smart carts that calculate carbon footprints in real time.
This flood of ideas isn’t meant to replace human creativity but to expand it. Designers shift roles from being sole inventors to curators and strategists, filtering and refining the most promising directions while bringing in human empathy and context.
Testing: Predictive and Proactive Feedback
Testing with real users remains a cornerstone of Design Thinking. But AI can make the process faster, broader, and more predictive.
Case Study: L’Oréal’s Predictive Product Testing
L’Oréal used generative AI to create virtual beauty assistants and marketing content at scale. By analyzing how users interacted with these digital experiences, they collected real-time insights long before manufacturing a single product. This helped them identify trends early and accelerate time-to-market by nearly 60%.
AI also enables virtual testing environments, simulating how users might interact with a product and spotting usability issues ahead of time. Instead of waiting for problems to emerge in expensive real-world tests, AI offers predictive feedback that helps refine designs earlier in the process.
The Evolving Role of Empathy
One area AI cannot replace is empathy. It can simulate patterns of user behavior, but it cannot truly understand human emotion, context, or cultural nuance. The future of Design Thinking in the age of AI will rely on humans doubling down on empathy and ethics, while AI handles scale, speed, and iteration.
This balance is critical. Without it, we risk building efficient but soulless products. With it, we create experiences that are not only faster to design but also deeper in impact.
Beyond Tools: New Challenges and Responsibilities
While AI supercharges Design Thinking, it also introduces new challenges:
- Bias in AI Models: If the data is biased, the design suggestions will be biased too. Human oversight is essential.
- Ethical Design: Who takes responsibility if an AI-generated idea leads to harm? Designers must act as ethical curators.
- Skill Shifts: Tomorrow’s designer will need to be part strategist, part prompt engineer, and part ethicist.
From Designers to Co-Creators
The future of Design Thinking isn’t about automating creativity; it’s about augmenting it. AI will take over repetitive tasks like rapid prototyping, data synthesis, and endless brainstorming. Designers, in turn, will have more space to do what only humans can: empathize, imagine, and shape products around real human needs.
The designer of tomorrow won’t just be a creator; they will be a co-creator alongside AI. They will guide machines with empathy, filter outputs with ethics, and ensure that innovation is not just faster, but also fairer and more human.
Bibliography
- Brown, Tim. Change by Design: How Design Thinking Creates New Alternatives for Business and Society. Harper Business, 2009.
- IDEO. Design Thinking Process Overview. Retrieved from https://designthinking.ideo.com/
- Uizard. AI-Powered UI Design Platform. Retrieved from https://uizard.io/
- Relume AI. Design Faster with AI-Powered Components. Retrieved from https://relume.io/
- L’Oréal Group. AI and Beauty Tech Innovation Reports. Retrieved from https://www.loreal.com/
- Norman, Don. The Design of Everyday Things. MIT Press, 2013.
- Nielsen Norman Group. The Future of UX and AI-Driven Design. Retrieved from https://www.nngroup.com/
Tuesday, 19 August 2025
Digital Literacy Revamp in Schools: Preparing Kids for an AI-First World
Not long ago, digital literacy in schools meant teaching kids how to type, create a slideshow, or maybe check facts online. Fast forward to today, and the digital world has grown into something far more complex. With AI, deepfakes, online manipulation, and the constant flood of social media content, the stakes are higher than ever.
This is why schools are beginning to revamp digital literacy curriculums not just to teach kids how to use technology, but how to navigate it responsibly and critically.
Why a Revamp Is Needed?
Think about it: the average child today grows up surrounded by smartphones, voice assistants, YouTube, Instagram and TikTok. By the time they’re in middle school, many have already encountered misinformation, parasocial relationships with influencers, and maybe even AI-generated content they didn’t realize wasn’t real.
Traditional digital literacy programs—focused on safe browsing or avoiding obvious scams—aren’t enough anymore. Kids need tools to:
- Spot a deepfake video.
- Understand how recommendation algorithms shape their worldview.
- Recognize when a chatbot isn’t human.
- Balance screen time with mental health.
In short, digital literacy is no longer optional—it’s survival.
The New Focus Areas in Digital Literacy
The revamped curriculums are more interactive, practical, and grounded in real-world scenarios. Here are some of the new priorities:
1. AI Awareness
Students are being introduced to AI—not just how to use it, but how it works behind the scenes. This includes recognizing AI-generated content, understanding its limitations, and asking critical questions like: Who made this model? What data trained it? Could it be biased?
2. Deepfake Detection
Kids are taught how to analyze images and videos for signs of manipulation. They learn to look beyond surface appearances and verify information with trusted sources.
3. Parasocial Relationships
An often-overlooked area: helping students understand the emotional bonds they may feel with influencers or digital personalities. The goal is to teach healthy boundaries between digital content and real-world relationships.
4. Online Empathy and Responsibility
Digital literacy isn’t only defensive—it’s about being responsible creators too. Kids learn about respectful online communication, the impact of cyberbullying, and why their digital footprint matters.
5. Family and Community Involvement
Some revamped programs now include parent modules—so families can talk about AI, misinformation, and online safety together. Digital literacy isn’t just a school subject; it’s a home conversation too.
The Bigger Picture: Shaping Future Citizens
A digitally literate generation is about more than safer internet use; it’s about building critical thinkers who can thrive in an AI-first society. Imagine students who can:
- Question the intent behind a viral post.
- Protect their privacy and demand transparency from platforms.
- Harness AI responsibly for learning, creativity, and problem-solving.
These aren’t just nice-to-have skills; they’re the foundation of tomorrow’s workforce and democracy.
Beyond Teaching, Toward Empowering
The digital literacy revamp is a recognition that technology isn’t just a tool anymore; it’s the environment kids are growing up in. Just as we once taught reading and math as essentials, we must now teach AI awareness, digital ethics, and online resilience as core life skills.
Schools can’t do it alone. Parents, policymakers, and tech companies must share responsibility. But by starting in the classroom, we’re giving the next generation not just the ability to use technology but the wisdom to question it, challenge it, and use it to build a better future.
Bibliography
- Common Sense Media. Digital Citizenship & Literacy Curriculum 2025. Retrieved from https://www.commonsense.org/education/
- UNESCO. Guidelines for Artificial Intelligence in Education. Retrieved from https://unesdoc.unesco.org/
- OECD. Children in the Digital Environment: Policies for Well-Being. Retrieved from https://www.oecd.org/education/
- U.S. Department of Education. Digital Literacy Framework for K-12 Education. Retrieved from https://tech.ed.gov/
- Brookings Institution. Deepfakes and the New Era of Misinformation. Retrieved from https://www.brookings.edu/
Sunday, 17 August 2025
The Future of AI Ethics: Balancing Innovation and Privacy
What does it mean to balance innovation and privacy?
As a full-stack developer, I see this tension every day. We design systems to be functional, fast, and intuitive. But behind that sleek interface lies a deeper challenge: the data that fuels AI, where it comes from, and how responsibly it is handled.
AI’s hunger for data is insatiable. The more data a model consumes, the smarter it becomes. But what happens when that data includes our most personal information, our medical records, search history, or even biometric details? How do we protect our digital footprint from being used in ways we never intended?
The Privacy Problem
The current state of AI and privacy is a delicate dance—one that often leans in favor of the algorithms rather than individuals. AI systems, particularly large language models (LLMs) and predictive analytics, are trained on vast datasets scraped from the internet. This creates several risks:
- Data Memorization and Exposure: Models can inadvertently memorize and regurgitate sensitive information, such as personal emails or addresses. This risk is amplified in healthcare and finance, where confidentiality is paramount.
- Algorithmic Bias: AI reflects the data it’s trained on. When datasets are biased, outcomes are biased too. We've seen facial recognition systems misidentify people of color, and hiring algorithms discriminate against women. This isn’t just about privacy—it’s about fairness and social justice.
- Lack of Consent: Many datasets are built without explicit consent from the individuals whose data is used. This raises pressing legal and ethical questions about ownership, autonomy, and digital rights.
These aren’t abstract issues. They translate into wrongful arrests, unfair financial profiling, and systemic discrimination. The need for stronger ethical and regulatory frameworks has never been clearer.
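To make the bias risk concrete, here is a minimal sketch of one common fairness check, the demographic parity difference: the gap in positive-decision rates between two groups. The numbers and the hiring scenario are invented for illustration, not drawn from any real system.

```python
# Minimal sketch: measuring demographic parity in a model's decisions.
# All data here is hypothetical, for illustration only.

def selection_rate(decisions):
    """Fraction of positive (e.g. 'shortlist') decisions in a group."""
    return sum(decisions) / len(decisions)

# Hypothetical hiring-model outputs (1 = shortlisted, 0 = rejected)
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # selection rate 0.625
group_b = [0, 1, 0, 0, 1, 0, 0, 0]   # selection rate 0.25

# A large gap suggests the model treats the groups very differently.
disparity = selection_rate(group_a) - selection_rate(group_b)
print(f"Demographic parity difference: {disparity:.3f}")  # → 0.375
```

Checks like this are cheap to run and make an abstract fairness concern auditable, though no single metric captures every form of bias.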
A Path Forward: Building Responsible AI
Balancing AI’s potential with the imperative of privacy demands a multi-pronged approach that blends technology, policy, and culture.
1. Privacy-Enhancing Technologies (PETs)
- Federated Learning: Train models across decentralized devices so raw data never leaves its source.
- Differential Privacy: Introduce noise into datasets to protect individual identities while still enabling useful analysis.
- Encryption Everywhere: Secure data both in transit and at rest to reduce exposure risk.
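As a sketch of how one of these techniques works in practice, the Laplace mechanism behind differential privacy can be written in a few lines: noise calibrated to the query’s sensitivity and a chosen privacy budget epsilon is added to the true answer. This is an illustrative toy, not a production implementation; the function name and parameters are my own.

```python
import math
import random

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Release a count with the Laplace mechanism.

    Adding Laplace(sensitivity / epsilon) noise bounds how much any
    single individual's presence can shift the output distribution.
    """
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) via the inverse-CDF method.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Hypothetical query: "how many users in the dataset have condition X?"
true_answer = 128
noisy_answer = dp_count(true_answer, epsilon=0.5)
print(f"True: {true_answer}, released: {noisy_answer:.1f}")
```

Smaller epsilon means more noise and stronger privacy; analysts still recover accurate aggregates because the noise averages out over many queries.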
2. Ethical Frameworks and Regulation
- Transparency: Make AI systems explainable. Users deserve to know not just what a model decides, but why.
- Accountability: Clearly define responsibility when AI systems cause harm—whether it falls on developers, deployers, or regulators.
- Data Minimization: Only collect what is necessary for a defined purpose—no more, no less.
3. Building a Culture of Responsibility
- Diverse Teams: Encourage inclusivity in development teams to detect and address bias early.
- Ethical Audits: Regular, independent evaluations to check for bias, privacy leaks, and misuse.
- User Control: Empower users with more granular control over their data and how it’s used in AI systems.
Public LLMs and the Privacy Challenge
Public Large Language Models (LLMs) bring extraordinary opportunities—and extraordinary risks. Their data sources are broad and often unfiltered, making privacy protection a pressing challenge.
Key Measures for LLMs:
- Data Minimization and Anonymization: Actively filter out sensitive data (PII) during training. Apply anonymization techniques that make re-identification infeasible. Offer opt-out mechanisms so individuals can exclude their data from training sets.
- Technical Safeguards (PETs): Use federated learning to keep raw data decentralized. Apply differential privacy to prevent data leakage. Ensure input validation so users can’t accidentally inject sensitive data into prompts.
- Transparent Governance: Publish transparency reports explaining what data is collected and how it’s used. Conduct independent audits to detect bias, leaks, or harmful outputs. Provide clear privacy policies written in plain language, not legal jargon.
- Regulatory & Policy Actions: Introduce AI-specific legislation covering data scraping, liability, and a digital “right to be forgotten.” Promote international cooperation for consistent global standards.
How Companies Collect Data for AI and LLM Training
The power of AI comes from the enormous datasets used to train it. But behind this lies a complex ecosystem of data collection methods, some transparent, others controversial.
Web Scraping and Public Data Harvesting: Most LLMs are trained on publicly available internet data like blogs, articles, forums, and social media posts. Automated crawlers “scrape” this content to build massive datasets. While legal in many contexts, ethical questions remain: did the original authors consent to their work being used in this way?
Example: GitHub repositories were scraped to train coding AIs, sparking lawsuits from developers who argued their work was used without consent or attribution.
Third-Party Data Brokers: Some companies purchase vast datasets from brokers that aggregate browsing history, purchase patterns, and demographic data. While usually anonymized, the risk of re-identification remains high.
Consumer Products and IoT Devices: Smart speakers, wearables, and connected home devices capture biometric and behavioral data, from sleep cycles to location tracking, often used to train AI in health and lifestyle domains.
Human Feedback Loops (RLHF): Reinforcement Learning with Human Feedback involves users rating or correcting AI responses. These interactions are aggregated to fine-tune models like GPT.
Shadow Data Collection: Less visible forms of data collection include keystroke logging, metadata tracking, and behavioral monitoring. Even anonymized, this data can reveal sensitive patterns about individuals.
Emerging Alternatives: Ethical Data Practices
To counter these concerns, companies and researchers are experimenting with safer, more responsible methods:
- Synthetic Data: Artificially generated datasets that simulate real-world patterns without exposing actual personal details.
- Federated Learning: Keeping raw data on user devices and aggregating only learned patterns.
- User Compensation Models: Exploring ways to reward or pay users whose data contributes to AI training.
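Federated learning, mentioned above, is easier to grasp with a toy example: each device computes a model update on its own data, and the server averages only those updates, never seeing the raw records. The one-parameter model and the datasets below are invented for illustration; real systems (e.g. frameworks like TensorFlow Federated) add secure aggregation, sampling, and much larger models.

```python
# Toy federated averaging: raw (x, y) pairs never leave their device;
# the server only ever sees per-device weight updates.

def local_update(w, local_data, lr=0.05):
    """One gradient-descent step on-device for the model y = w * x,
    minimizing squared error over this device's private data."""
    grad = sum(2 * (w * x - y) * x for x, y in local_data) / len(local_data)
    return w - lr * grad

def federated_average(global_w, device_datasets):
    """Server step: aggregate per-device updates (equal weights here)."""
    updates = [local_update(global_w, data) for data in device_datasets]
    return sum(updates) / len(updates)

# Hypothetical: three devices, each holding private pairs with y ≈ 2x
devices = [
    [(1.0, 2.1), (2.0, 3.9)],
    [(1.5, 3.0), (3.0, 6.2)],
    [(0.5, 1.0), (2.5, 5.1)],
]

w = 0.0
for _ in range(50):
    w = federated_average(w, devices)
print(f"Learned weight after 50 rounds: {w:.2f}")  # converges near 2.0
```

The privacy gain is structural: the server learns a shared pattern (here, the slope of the data) without ever collecting the individual measurements that produced it.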
Innovation with Integrity
The future of AI isn’t just about building smarter machines; it’s about building systems society can trust. Innovation cannot come at the expense of privacy, fairness, or autonomy.
By embedding privacy-enhancing technologies, enforcing ethical frameworks, and fostering a culture of responsibility, we can strike the right balance.
AI has the power to revolutionize our world but only if it serves humanity, not the other way around. The real question isn’t how fast AI can advance, but how responsibly we choose to guide it.
Bibliography
- Floridi, L. & Cowls, J. (2022). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review.
- European Union. (2018). General Data Protection Regulation (GDPR). Retrieved from https://gdpr-info.eu
- Brundage, M. et al. (2023). Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. Partnership on AI.
- Cybersecurity & Infrastructure Security Agency (CISA). Privacy and AI Security Practices. Retrieved from https://www.cisa.gov
- IBM Security. (2024). Cost of a Data Breach Report. Retrieved from https://www.ibm.com/reports/data-breach
- OpenAI. (2023). Our Approach to Alignment Research. Retrieved from https://openai.com/research