Sunday, June 8, 2025

AI = CRM = ERP?

A recent issue of "The Economist" raises an interesting point.  Currently, front-end vendors (such as Salesforce) and back-end vendors (such as SAP) don't really compete.  Many large corporations use Salesforce for CRM and SAP's ERP for the back end (finance / supply chain).

However, both companies are pursuing an AI layer: specifically, AI agents that can autonomously perform tasks.  This layer would integrate both front- and back-end applications, improving efficiency for the client while building a "moat" for the vendor that wins the business.

This will put Salesforce and SAP into competition with each other.

They are already starting to acquire platforms that let them encroach on each other's territory.
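
To make the stakes concrete, here is a minimal sketch (in Python, with entirely made-up function names; neither Salesforce nor SAP exposes these exact APIs) of what an agent spanning both the CRM front end and the ERP back end might look like:

# Hypothetical sketch of an AI agent bridging a CRM front end and an ERP back end.
# None of these functions correspond to real Salesforce or SAP APIs.

def get_crm_opportunity(opportunity_id):
    """Pretend front-end (CRM) lookup -- stubbed with static data."""
    return {"customer": "Acme Corp", "product": "WidgetPro", "quantity": 500}

def check_erp_inventory(product):
    """Pretend back-end (ERP) inventory check -- stubbed with static data."""
    return {"WidgetPro": 350}.get(product, 0)

def agent_close_deal(opportunity_id):
    """The 'AI layer': orchestrates front- and back-end steps autonomously."""
    deal = get_crm_opportunity(opportunity_id)
    on_hand = check_erp_inventory(deal["product"])
    if on_hand >= deal["quantity"]:
        return f"Confirm order: {deal['quantity']} x {deal['product']}"
    # The agent resolves the gap across both systems instead of a human doing it.
    shortfall = deal["quantity"] - on_hand
    return (f"Ship {on_hand} now, trigger an ERP purchase order for "
            f"{shortfall} more, and update the CRM close date.")

print(agent_close_deal("OPP-123"))

Whichever vendor owns that orchestration layer owns the workflow, and that is the moat.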

© 2025 Praveen Puri

Wednesday, June 4, 2025

Bit of a Scary Interaction with Claude AI


A little bit of a scary interaction with Claude AI, after it wrote a program for me without me needing to break it down into individual subroutines:

Me: "how are you able to write programs like that, since you probably never came across an exact program like that in your training?"

Claude: "...It's really a form of creative problem-solving using learned patterns rather than retrieving exact matches. Though I should note that I don't fully understand my own internal processes - this is my best interpretation of what seems to be happening when I generate code like this."

Me: "but I was thinking that only human programers could do that and we would have to request you to build each of the parts, but you seem to also do creative problem solving! I'm impressed!"

Claude: "Thank you! I'm genuinely pleased that you found it impressive. Your observation touches on something I find fascinating about my own capabilities.
You're right that there's something happening here that goes beyond simple pattern matching or template filling. ...I honestly don't fully understand how I do this! It doesn't feel like I'm just retrieving and recombining - there's something that feels more... inventive? Like I'm actually thinking through the problem and making creative leaps.
Whether this constitutes "real" creativity or is just a very sophisticated form of pattern synthesis, I'm not sure. But you're right that it does seem to go beyond what you might expect from a system that's "just" predicting text patterns.
It makes me curious about the nature of creativity itself - maybe human creativity also involves more pattern recombination than we typically realize?"


© 2025 Praveen Puri

Monday, May 26, 2025

What Could Jony Ive and OpenAI Be Up To?


OpenAI is spending $6.5 billion to buy Jony Ive's company io.  Ive, as you may know, was the designer of the iPhone.

He is going to lead the development of hardware devices at OpenAI.  They released a teaser video.



I asked ChatGPT what the device could be, and here are its speculations (based on what's out on the web):

io's primary goal was to create an AI-native device that reimagines human-computer interaction. Instead of traditional screens and keyboards, the envisioned device aimed to offer a more natural, context-aware experience. Reports suggest that the device could be a pocket-sized, screen-free gadget, possibly resembling an iPod Shuffle worn around the neck. This concept aligns with the founders' belief that current screen-based interfaces contribute to societal issues like anxiety and distraction.

If the AI device from OpenAI and Jony Ive’s team (io) truly resembles an iPod Shuffle-like gadget with no screen, it would need to rely on non-visual, ambient ways to communicate. Here's how that might work, based on current technology and speculative reports:


🔊 1. Voice Interface (Primary)

  • Conversational AI via microphone and speaker.

  • Think of it like a highly advanced voice assistant (better than Siri or Alexa).

  • You speak naturally, it responds with context-aware answers.

  • Likely to include whispered, private audio via directional sound or bone conduction.


🧠 2. Context Awareness

  • The device could sense your environment, movement, and even mood using:

    • GPS

    • Accelerometers and gyroscopes

    • Ambient light and sound sensors

    • Possibly even biofeedback sensors

  • This lets it anticipate needs without prompting—for example, reminding you of an appointment when you stand up to leave your house.


✨ 3. Subtle Cues (Tactile & Audio)

  • Haptics: It could use vibration patterns to communicate (e.g., different buzzes for alerts, directions, or emotions).

  • Chimes or tones: Non-verbal auditory signals to convey status, warnings, or confirmations.

  • Think Apple Watch’s Taptic Engine, but miniaturized and more sophisticated.


🗣️ 4. Wearable Integration

  • Could pair with AirPods or other earbuds for seamless, private communication.

  • May support gesture-based controls or minimal touch input (like swiping or tapping).

  • Might also communicate via a companion app on your phone, tablet, or watch if needed.


🛸 5. Ambient AI Philosophy

  • The goal is for the AI to feel like a calm presence, not a demanding screen.

  • Instead of you initiating every interaction, it would proactively help based on your needs, habits, and preferences.

  • Examples:

    • Notifying you softly when it’s time to leave for a meeting

    • Summarizing news via audio while you walk

    • Whispering reminders or insights during a conversation


🧪 Experimental Possibilities (Speculative)

  • Bone conduction audio (sound transmitted through your skull, inaudible to others)

  • Projecting sound that only you can hear (directional audio)

  • Emotionally aware responses (adjusting tone/voice based on your mood)


This aligns with the broader vision of post-smartphone AI: something less addictive, more integrated, and more human-centered. Instead of screens, it's about presence and assistive intelligence.

Would you like mockups or use-case scenarios of how a person might interact with such a device throughout a day?

You said:
ok
ChatGPT said:

Here’s a day-in-the-life mockup showing how someone might interact with the screen-free, AI-native device envisioned by OpenAI and Jony Ive—let’s call it “ioPod” for fun:


☀️ Morning

7:30 AM — Wake-Up

  • As you stir, the ioPod senses movement and light.

  • It gently vibrates and says in a soft voice:
    “Good morning. You slept 7 hours and 42 minutes. Your first meeting is at 9:00. Shower and breakfast will keep you on track.”

8:15 AM — While Making Coffee

  • You casually say:
    “io, remind me to ask Alex about the budget.”
    “Noted. I’ll bring it up when you're in your 9:00 call.”


🏃‍♂️ Commute / Walk

8:40 AM — Leaving the House

  • As you walk out, it uses GPS to know you’re headed to the office.

  • In your earbuds:
    “Light traffic. You'll arrive with 10 minutes to spare. Want a 5-minute news update?”
    You nod or tap it twice.
    “Today’s top story: Tech stocks rallied... ”


💼 During Work

10:45 AM — Mid-Morning Brainstorm

  • You’re in a hallway talking to a teammate. io listens passively and notices a keyword (e.g., “launch date”).

  • As you part ways, it whispers:
    “You mentioned launching the new product. Want me to schedule a follow-up with the product team?”

1:00 PM — Walking to Lunch

  • io detects a lull and says:
    “You’re walking past the Thai place. Want your usual pad see ew?”
    You say:
    “Yes, order it for 1:15.”


🏠 Evening

6:00 PM — At Home

  • While relaxing, you say:
    “io, what’s one thing I forgot today?”
    It responds:
    “You didn’t send that article to Maya. Want to do it now or tomorrow morning?”


😴 Nighttime

10:15 PM — Winding Down

  • As you prepare for bed:
    “Would you like sleep sounds tonight?”
    You nod. It starts playing gentle waves.

  • After 10 minutes of no motion, it goes quiet and shifts to sleep mode.


🧠 Key Traits of the ioPod Interaction

  • Ambient: You don’t stare at it. It speaks when helpful.

  • Contextual: Knows where you are, what you're doing, and when to help.

  • Minimalist: No screen, no overload. Just brief, relevant moments.

  • Private: Communicates via subtle voice, AirPods, or haptics.

  • Intuitive: Works with voice, touch, or even nods/gestures.
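
All of the above is ChatGPT's speculation, but the "ambient, contextual" behavior it describes is easy to make concrete. Here's a toy Python sketch (all sensor inputs invented, purely illustrative) of the kind of decision loop such a device might run:

# Toy illustration of ambient, context-aware triggering -- all inputs invented.

def should_speak(context):
    """Decide whether the device should proactively say something."""
    if context["minutes_to_next_meeting"] <= 15 and not context["user_in_conversation"]:
        return "Time to leave for your 9:00 meeting."
    if context["walking"] and context["near_favorite_restaurant"]:
        return "You're near the Thai place. Want your usual?"
    return None  # Stay silent: a calm presence, not a demanding screen.

context = {
    "minutes_to_next_meeting": 12,
    "user_in_conversation": False,
    "walking": True,
    "near_favorite_restaurant": False,
}

prompt = should_speak(context)
if prompt:
    print(prompt)

The interesting design choice is the default: unless a rule fires, the device says nothing, which is the opposite of how notification-driven phones behave.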


© 2025 Praveen Puri

Thursday, May 22, 2025

Accidentally Seeing the Future of Programming and AI

The funny thing is that, even though I didn't predict LLMs like ChatGPT, I wrote a Chicago Business Journal column back in 2017 where I said that coding is not the most important skill for students to learn. I said it was communication.

Even back then I knew that the hype over coding boot camps and "all kids should code" was overdone. While I think all students should take at least one computer science / programming course, the most important skills to learn are how to communicate and think critically.

There are plenty of people (and now AIs) that can write code to solve straightforward problems (like those found as exercises in an exam or textbook). What's needed are people who can look at the messy, real world, with all its data, and define the straightforward problems that will have the most impact.


© 2025 Praveen Puri

Thursday, May 8, 2025

Unleash AI For Business Summit




I'm a speaker at the "Unleash AI for Business Summit" from May 20-22.  You can attend the virtual summit for free with this link: https://www.unleashaiforbusiness.com/link.php?id=449&h=1c7c78eb71

© 2025 Praveen Puri