Like most folks who started programming in the '70s and '80s, I began on mainframes. It was years before I programmed a desktop computer.
And while I strive to avoid damn-kids-get-off-my-lawn graybeard ramblings, I'll sometimes get this question from a brash young whippersnapper trying to understand what pre-PC era programming was like:
Back in your first programming job, what kind of computer was on your desk?
In my first programming job, I didn't have a computer on my desk.
My desk - like those of my co-workers - was device-free. The nearest we had to a personal computing device was an LCD calculator.
Not only did I not have a computer on my desk, there wasn't one in the entire office, or even in the same building. We were programming a mainframe that was about five miles along the road.
Instead, imagine a stack of these on my desk:
This is an IBM coding sheet, for COBOL. We had them for other languages, too: Pascal, Assembly language, Fortran.
You'd use these to hand-write your computer programs. In pencil.
Because this was the dizzying heights of "undo" technology:
And when you were finished handwriting a section of code - perhaps a full program, perhaps a subroutine - you'd gather these sheets together (carefully numbered in sequence, of course) and send them along to the folks in the data entry department.
They'd type it in.
And the next day you'd get a report to find out if it compiled or not.
Let me say that again: the next day you could find out if your code compiled or not.
If you'd made even a simple typo - say, a missing period, or something that looked more like a colon than a semicolon - it'd take at least another 24 hours to get a fix in and turn it around.
This method, as you might imagine, requires a somewhat higher level of attentiveness to writing code than the way most of us work now.
I wouldn't want to work like this again, but I'm glad I have worked like this.
And though it may seem primitive now, several of the programmers I worked with at the time viewed that state of affairs as programmer-friendly, if not downright cushy.
They'd been used to working with punch cards, something I only ever did a handful of times.
I'll admit that even at the time (early '80s) this was a little - though not much - behind the times. I was working for the British Government, not some new-fangled highly-funded tech company. You use what you can get.
A year or two into this, we were finally upgraded. Amidst much rejoicing, we were provided a handful of IBM 3270 dumb terminals - monochrome devices, with the classic IBM Model M clicky keyboard.
Imagine the sensation - type your own code in? Compile a program yourself? How marvelous! What an age we lived in!
Another common question:
Should I learn C and/or C++ to be an Objective-C developer?
If you're asking, and therefore have any amount of resistance to the idea, then: no.
But this is a long-winded "it depends" answer. There are some developers who very publicly proclaim that no one should touch Objective-C without being a C expert first, but that is simply and demonstrably untrue: I know many successful, knowledgeable iOS devs with multiple apps in the store who couldn't write a basic C malloc statement if their lives depended on it.
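(For anyone in that camp who's curious, this is roughly the kind of thing I mean - a minimal sketch of the malloc idiom, with a made-up helper name, just to show what those devs are happily living without:)

```c
#include <stdlib.h>

/* Allocate a zero-filled array of n ints on the heap.
   Returns NULL if the allocation fails; caller must free() the result.
   (make_buffer is a hypothetical name for illustration.) */
int *make_buffer(size_t n) {
    int *buf = malloc(n * sizeof *buf);  /* may return NULL on failure */
    if (buf == NULL)
        return NULL;
    for (size_t i = 0; i < n; i++)
        buf[i] = 0;
    return buf;
}
```

Manual allocate/check/free is exactly the bookkeeping that Objective-C (especially with ARC) hides from you - which is why plenty of iOS devs never need it.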
However, some of this is personality-driven - some folks enjoy learning the underlying tech, and it's true that if you're good with C, you'll enjoy lightbulb-going-off understanding of why-things-are-the-way-they-are in the Apple frameworks.
So - is knowledge of C useful? Sure.
Is it necessary? No, I don't believe so.
There's already enough to learn in the world of iOS dev, so I suggest that most people, most of the time, be as ruthless as they can in dumping anything that's not absolutely necessary.
If your primary goal is "Write an iOS app and get it in the store", then C is a nice-to-have-but-not-essential skill.
If you know C already, great - but if you don't know it, spending weeks (or months) wrapping your head around C concepts and techniques that you will never (or at least, very, very rarely) use in Objective-C isn't the best use of your time.
However, if your goal is not just the pragmatic "write and sell an app", but also "have a deep understanding of what's going on under the hood", then sure, C is a stunningly useful language to know.
(Well, I'd pause here to make an argument for Assembly Language as the true hardcore language to give you a real appreciation of what's actually happening on the chip... but try C first and see how you like it.)
What about C++?
Okay - there's value in knowing C even if you're not going to write plain C programs, but C++ is a language you should learn if you want to actually write C++ programs. If you don't want to write C++ code, don't learn it.
Full disclosure: C++ is not my favorite language. I spent a long time writing it, and about ten years ago I had a consulting gig where I was brought in to spend six months purely fixing other people's incompetently written C++. That was like aversion therapy; I shudder to look at it now. So I don't mind if you like it, but I'd be a happy man if I never wrote a line of it ever again.
Hey Simon, what's your favorite programming language?
I don't have one.
No, really - what is it?
Seriously. You may as well ask what my favorite brand of gasoline is. I understand that many people have a strong preference - I'm not one of them.
But you do a lot of iOS stuff, so your favorite language must be Objective-C, right?
God, no. Objective-C certainly has its quirky charms, but it's not even in my top five list.
If I had a top five list. Which I don't.
Ah! But that suggests you have preferences, at least?
Yes, you've cunningly caught me in my foul web of lies. There are some languages I prefer: I'll gravitate to C-based languages when given the choice. But that's due to long familiarity and personal speed, not any pretense of objective analysis and/or artistic judgement.
If you're a pragmatic, in-the-trenches software developer (as opposed to, say, a research-focused computer scientist, who has different goals) the programming language is - and should always be - a secondary decision.
First question: what are you trying to do?
You want to build iOS apps? Learn Objective-C. End of story.
A strong passion for either language is unnecessary - moreover, there's a significant benefit to eyes-wide-open awareness of the pain points and quirks of the language you're currently working in.
Sure, you don't want to hate the language, but you don't need to love it.
I disagree! Objective-C is a fantastic, totally coherent language! You just don't understand it properly!
Uh-huh. Come back with an explanation for why this fantastic, totally coherent language needs three different terms for the concept of null without sounding like you have a case of Programmer Stockholm Syndrome. ("But my kidnapper loves me! You just don't understand him the way I do!")
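(For the record, those three terms are C's NULL, Objective-C's nil for a null object pointer, and Nil for a null class pointer - with NSNull layered on top for collections that can't store nil. A plain-C sketch of roughly how the headers spell them, with stand-in typedefs so it compiles outside Objective-C:)

```c
#include <stddef.h>    /* NULL: the plain C null pointer */

/* Simplified from <objc/objc.h>; id and Class are stand-ins here
   so this compiles as plain C. */
typedef void *id;      /* an object pointer */
typedef void *Class;   /* a class pointer   */
#define nil ((id)0)    /* "no object"       */
#define Nil ((Class)0) /* "no class"        */
```

All three are, of course, just zero wearing different hats - which rather makes the point.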
Make no mistake: I enjoy writing Objective-C. And Apple have made some incredible improvements to it over the years. ARC is the most admirable method of memory management I've ever encountered, and I continue to be impressed by how well it works. But let's face it: Objective-C is a language that makes everyone learning it say "Wait... what?" a dozen times in the first three months.
So neither of these languages are "favorites", but they're the languages I've used most in recent years, simply because they're the right ones to build what I've wanted to build.
What about other languages?
I like writing Python, though I'm still in a lovey-dovey, rose-tinted-spectacles honeymoon period with it, because I only use it for recreational code. I haven't had to write enough battle-tested production code to experience its foibles and issues, which I'm positive it has.
The same goes for functional languages: I've had fun with Haskell and Erlang, but I've yet to ship mission-critical software in either language, and without that I'm not going to make a personal judgement on their real-world pros and cons.
And I'm very fond of C#, which I've used heavily since its early days. Though the shine did wear off as more and more "features" were added to the language and surrounding frameworks.
When I first used it, C# and .NET development had a good amount of simplicity and clarity. Now, multiple competing paradigms make it harder, particularly for beginners. (Should I use MVC? Or MVVM? Or MVP? Webforms? Single-page? etc., etc.)
Choice is not always a benefit. I like constraints. They make it easier to just get to work.
On that front, I'll certainly tip my hat to how Apple have handled Objective-C. If you look at what's been added to that world over recent years, it's all tended toward making it simpler. ARC, literals, even the default code provided in a new Xcode project or class template - they've all gotten more straightforward over the years, without muddying the waters with ten competing design paradigms to choose from.
I suppose I'll always have a soft spot for both Assembly Language and COBOL (yes, really) as they were the focus of my first years of professional programming. I'm sure even when I'm in my dotage and unable to remember my own name, I'll be able to write an ENVIRONMENT DIVISION block or MOV instruction without batting a wrinkly eyelid.
What about Java?
Java is a perfectly acceptable language. It's white bread; it stirs no strong emotion in me one way or the other.
Any languages you don't like?
That's easier. C++.
Okay: technically, I don't dislike the language itself - I just get an immediate sinking feeling from merely looking at any large amount of it. But understand I once took a six month contract just fixing a bunch of bug-ridden C++ written by a dozen developers who'd been sent on one C++ course and had no idea what they were doing - it was like aversion therapy for that language.
Well-written C++, a joy.
Badly written C++, a complete soul-sucking nightmare, in a way that bad code in any other language never quite seems able to match.
So which language should I learn?
Now you're just yanking my chain.