Sony A9III & FE 300 F2.8 GM Announcement Discussion

The camera doesn't learn; it's really not even AI. It's just heavily programmed to recognize what Sony's engineers told it to. It's really no different from Animal/Bird/Human face detection, etc.: it recognizes the human form in many different poses, when the form has its back to you, from a distance, and so on. I think the term has been dropped because they got called out on it too many times.

Of course, one does wonder what the camera would do if something like a train had its back to you... :unsure:
You mean to tell me they have not told the camera what the back of a bus or a train looks like? I guess that makes Sony more stupid than the camera ;)
 
I wonder, if you are using eye AF and a cat walks by with its tail up, whether it tracks that....
 
The reason why I think it will get the prioritization of Human/Animal/Bird, but not all the other recognitions, is that the A7RV and A9III use a chipset the A1 doesn't have to do that work. Moving that work over would take a significant amount of development and testing, none of which would be used on future camera models. The deeper into development the A1II is, the less likely additional subject recognition will be added to the A1. The prioritization, however, requires less work: the identification is already being done; it just needs to know which order to run the calculations in.
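That last point can be shown in a few lines: if per-class recognition already exists, a priority setting is just the order the recognizers run in. Everything below is a made-up illustration of the idea, not Sony's actual firmware.

```python
# Hypothetical sketch of subject-recognition prioritization: the per-class
# detectors already exist; a priority list only decides the order they run in.
# All detector names and return values here are invented for illustration.

def detect_human(frame):   # stand-ins for the camera's existing recognizers
    return None            # pretend no human was found in this frame

def detect_animal(frame):
    return None            # pretend no animal was found either

def detect_bird(frame):
    return {"subject": "bird", "box": (120, 80, 40, 40)}  # pretend hit

DETECTORS = {"human": detect_human, "animal": detect_animal, "bird": detect_bird}

def recognize(frame, priority=("human", "animal", "bird")):
    """Run the existing detectors in priority order; first hit wins."""
    for name in priority:
        hit = DETECTORS[name](frame)
        if hit:
            return hit
    return None

print(recognize(None))                                # bird, checked last
print(recognize(None, ("bird", "human", "animal")))   # bird, checked first
```

Changing the priority tuple changes only which calculation runs first, which is why it would be far less work than porting a whole new recognition class.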

Oh, I thought you were talking about the A1 mark II, not the firmware for the A1 mark I.
 
Absolutely. I walk through the entire menu as one of the first things I do with each new camera, but things are getting so complex now that even if I've seen the menu item, I don't always understand its intended use, let alone the subtleties.
Me too. And I'll read at least one of the books, including having a paper copy of one (Busch). And watch some YouTube.

I'm on my third Sony camera. The books get very repetitive. But sometimes I even find things that I hadn't realised that my previous camera did already!
One thing I have noticed is that Sony have made no further mention of the "deep learning".

I think two things about the "deep learning": 1) the term just died; nobody really uses it anymore. 2) For the camera to actually learn, the teaching would need to come from somewhere, which means firmware updates, and we all know how those have been.

Or it would have to learn from the photos one takes. That would be neat and it's probably on the way, one day. That would be more like actual AI, which we don't have yet. Likewise, "real-time" eye AF should have been called "all the time" AF, or "one less button" AF. The actual words chosen are just marketing speak.

Anyway, the "AI" AF, with multiple subject choices, is available in the a6700 now. I can't see a new A1 (or any higher/newer model) not having it. Maybe a new A1 would have another one-less-button feature and would have auto subject recognition.
 
I cannot believe people are still worrying about the cameras learning when it is actually the people behind them who need to learn. Most cameras are way smarter than most people behind them, myself included ;)
 
Or it would have to learn from the photos one takes. That would be neat and it's probably on the way, one day. That would be more like actual AI, which we don't have yet. Likewise, "real-time" eye AF should have been called "all the time" AF, or "one less button" AF. The actual words chosen are just marketing speak.

Anyway, the "AI" AF, with multiple subject choices, is available in the a6700 now. I can't see a new A1 (or any higher/newer model) not having it. Maybe a new A1 would have another one-less-button feature and would have auto subject recognition.
One catch with "auto subject recognition" would be mixed subjects in the frame. If you are shooting showjumping or horse racing, do you want to focus on the human or the horse? (I can see arguments for both.) Or photographing cricket, when there are humans and birds on the ground. We already see animal/bird (the only auto recognition mode we have today) being enhanced in the A9 III with a priority setting for animals or birds.

I don't mind the idea of auto, but I'd like to keep the ability to say "I just want birds".
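The "auto, but let me say I just want birds" idea could be as simple as an allow-list on top of whatever the auto mode already scores. A toy sketch; the class names and confidence values are invented, and no real camera API is implied:

```python
# Hypothetical sketch: auto subject recognition scores every candidate in the
# frame, and an optional allow-list restricts which classes are eligible to win.
# Class names and confidences below are invented for illustration.

def pick_subject(candidates, allowed=None):
    """candidates: list of (class_name, confidence) pairs.
    Return the highest-confidence candidate whose class is allowed."""
    pool = [c for c in candidates if allowed is None or c[0] in allowed]
    return max(pool, key=lambda c: c[1], default=None)

frame = [("human", 0.92), ("horse", 0.88), ("bird", 0.75)]
print(pick_subject(frame))            # full auto: ('human', 0.92)
print(pick_subject(frame, {"bird"}))  # "I just want birds": ('bird', 0.75)
```

So the showjumping dilemma above (human vs. horse) becomes a one-line setting rather than a new recognition mode.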
 
Or it would have to learn from the photos one takes. That would be neat and it's probably on the way, one day.
If they did do something like this, it would still need to go through some kind of firmware update, as most AI tools need tens of millions of data points, if not hundreds of millions, for identifications like these. It would be interesting to see how many people would do this consistently.
I cannot believe people are still worrying about the cameras learning when it is actually the people behind them who need to learn. Most cameras are way smarter than most people behind them, myself included ;)
I cannot believe that you cannot believe people want more and more things done for them without needing to learn or do anything.
 

I cannot believe that you cannot believe people want more and more things done for them without needing to learn or do anything.
It is becoming crazy. What part do you want to play in the shot? Why not just use your phone? You are clearly spending so much time talking about it and reading up on it that you could actually improve by using what you have.
 
I'll just send the camera, while I sit at home and read the book!

But do we mind the cameras making it easier for us? There's always the option to turn all that off.

All this autofocus stuff is wonderful. I was asking around about upgrading from the a6500 to the a7IV: is it really that much better? Someone told me it's so good they sometimes feel like a bit of a fraud, because the camera has done so much of the work.

It's true. But does that mean my images are all perfect and wonderful? Far from it! Not many fail on focus, but lots do for other reasons, ranging from sheer composition to actually getting the best out of the camera's metering. I have heaps to learn.
 
If they did do something like this, it would still need to go through some kind of firmware update, as most AI tools need tens of millions of data points, if not hundreds of millions, for identifications like these. It would be interesting to see how many people would do this consistently.
Something like car auto transmissions learning the way we drive?
 