Good processes don't test everything; they take a balanced approach. This is counterintuitive: it's easy to assume that more testing is always better.
It is tempting to execute every test against every combination of browser, device, and page variant. Once you have a large application, this quickly becomes impractical, and often unnecessary.
If you do not have a fixed list of supported devices, common tooling such as Google Analytics can help. From it, build a list of the devices and browsers your customers actually use. Focus on the browsers that account for the top 95% of your traffic, or 99% if you have a larger budget. An official policy on which browsers you support also helps here.
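The cutoff above can be computed mechanically. A minimal sketch, assuming you have exported browser usage shares from your analytics tool (the numbers below are hypothetical, not real data):

```python
# Hypothetical browser shares exported from an analytics tool.
usage = {
    "Chrome": 0.62,
    "Safari": 0.21,
    "Edge": 0.07,
    "Firefox": 0.05,
    "Samsung Internet": 0.03,
    "Opera": 0.02,
}

def coverage_list(usage, target=0.95):
    """Return the smallest set of browsers whose combined share meets `target`."""
    supported, total = [], 0.0
    for browser, share in sorted(usage.items(), key=lambda kv: kv[1], reverse=True):
        supported.append(browser)
        total += share
        if total >= target:
            break
    return supported

print(coverage_list(usage))        # browsers covering ~95% of traffic
print(coverage_list(usage, 0.99))  # with a higher budget, cover 99%
```

Re-running this against fresh analytics data each quarter makes the quarterly review above almost free.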
Revisit this list at least once per quarter, as the coverage you need may change.
Tip: Weight the priority of your bugs by browser popularity. Likewise, bugs on popular devices should be fixed first.
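One way to apply this tip is to fold browser share into the priority score itself. A minimal sketch, assuming hypothetical severity levels and shares (none of these names come from the text):

```python
# Hypothetical severity weights; tune these to your own triage scale.
SEVERITY = {"critical": 3, "major": 2, "minor": 1}

def bug_priority(severity, browser_share):
    """Higher score = fix sooner. A major bug on a 62%-share browser
    outranks a critical bug on a 2%-share one."""
    return SEVERITY[severity] * browser_share

# Hypothetical bug list: (title, severity, share of the affected browser).
bugs = [
    ("layout broken on checkout", "major", 0.62),
    ("crash on load", "critical", 0.02),
    ("typo in footer", "minor", 0.62),
]
bugs.sort(key=lambda b: bug_priority(b[1], b[2]), reverse=True)
for title, severity, share in bugs:
    print(title, round(bug_priority(severity, share), 2))
```

The exact weighting is a judgment call; the point is that popularity enters the calculation rather than being an afterthought.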
Which areas of your product should you test? The two most practical ways to decide are by looking at usage data or bugs.
Using tooling such as Amplitude, Mixpanel, or similar, work out which areas of your product see the most activity. These tools are often managed by a product team that tracks feature usage. By identifying the most common paths, you can focus your testing efforts there first. If your tooling supports it, also look at common flows through your application.
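If your analytics tool lets you export raw events, ranking areas is a one-liner. A minimal sketch, assuming a hypothetical `area.action` event-naming convention (the event names here are made up for illustration):

```python
from collections import Counter

# Hypothetical raw event export, named "area.action".
events = [
    "checkout.view", "checkout.pay", "search.query",
    "checkout.view", "profile.edit", "search.query", "checkout.pay",
]

# Tally events per product area; test the busiest areas first.
by_area = Counter(e.split(".")[0] for e in events)
for area, count in by_area.most_common():
    print(area, count)
```

The same tally over user session paths, rather than single events, surfaces the common flows mentioned above.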
Tip: It's uncommon, but good practice, to run these analytics tools in your QA environment too. You can then check that your tests actually exercise the paths your users take.
Developers often use error-reporting software, and it is a great source for focusing QA testing efforts. Tracking defects by product area, developer, team, and specification source is a good start; it will often reveal patterns. Any patterns you find can guide process improvements and retrospectives with developers.
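Tallying defects along those dimensions is straightforward once they are recorded. A minimal sketch, assuming hypothetical defect records with `area` and `team` fields (the field names and data are illustrative, not from any real tracker):

```python
from collections import Counter

# Hypothetical defect records exported from an error-reporting tool.
defects = [
    {"area": "checkout", "team": "payments"},
    {"area": "checkout", "team": "payments"},
    {"area": "search",   "team": "discovery"},
    {"area": "checkout", "team": "frontend"},
]

# Tally defects per product area and per team to surface patterns.
by_area = Counter(d["area"] for d in defects)
by_team = Counter(d["team"] for d in defects)
print(by_area.most_common())
print(by_team.most_common())
```

A lopsided tally, such as one area or one specification source producing most defects, is exactly the kind of pattern worth raising in a retrospective.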