PMXBOT Log file Viewer


#mongodb logs for Thursday the 23rd of July, 2015

[07:07:40] <Ned_> anyone know why MMS might be hanging on "AdjustUsers" ?
[07:07:57] <joannac> Check the automation agent logs
[07:08:02] <Ned_> I did
[07:08:07] <Ned_> there's a _lot_ of stuff in there :-9
[07:08:13] <Ned_> oh, wait
[07:08:15] <Ned_> it just fixed itself
[07:08:18] <Ned_> okay, false alarm :D
[07:08:34] <Ned_> sorry
[09:31:34] <aps> I did db.fsyncLock( ) at secondary member (version 2.6), took EBS snapshot of data and journal drive, used this snapshot to create new volume mounted at a new mongo (3.0.4) and added this new mongo to the replica set.
[09:32:01] <aps> I was hoping that it would catch up with the replication fast, but it didn't.
[09:32:36] <aps> It started replication from the beginning. I see data and journal file are present on the new disk. What am I missing here?
[09:33:45] <aps> New member is in state "STARTUP2" and is not syncing at all.
[10:21:22] <crashev> hello, does mongodb still have problems with randomly losing data on writes, or is that a thing of the past?
[10:21:57] <Zelest> used mongodb in production for over 3 years now.. never happened a single time so far..
[10:28:56] <pamp> Someone knows why I'm getting this exception :exception: socket exception [CONNECT_ERROR] for shard0000:27017
[10:30:15] <pamp> the connection between shard is ok, I can connect to second shard from the first, contrariwise too
[10:30:28] <pamp> Im using .Net driver
[10:30:38] <pamp> mongo version 3.0.4
[10:30:43] <pamp> Ubuntu servers
[10:30:54] <pamp> What can produce this error?
[10:32:54] <crashev> Zelest: wonder where all those articles and issues come from then, about inconsistencies and losing data
[10:33:32] <crashev> Zelest: just read this one http://cryto.net/~joepie91/blog/2015/07/19/why-you-should-never-ever-ever-use-mongodb/
[10:33:45] <Zelest> yeah, someone linked it the other day here as well
[10:33:56] <Zelest> and most of it covers 2.4.x
[10:33:59] <Zelest> which is *years* ago
[11:49:50] <crised> if the data from my db is public to read... When building a website... Is it possible/recommended to go directly from front end / browser to the database?
[11:50:00] <crised> Without passing through a business logic server?
[11:52:18] <crised> or maybe should I use an HTTP interface?
[12:00:00] <lmatteis> guys say i have a collection contains lots of logs for web requests
[12:00:08] <lmatteis> and i want to display all the logs separated by domain (domain per row) along with the percentage uptime i recorded (which is essentially just calculated for each request for that row)
[12:00:27] <lmatteis> would an aggregate be performant to generate such thing dynamically (each time a user accesses the page)? or do i need to run a pre-aggregate offline instead?
[12:01:17] <coudenysj> lmatteis: the query will be the same, some you could try to do it on the fly, if it is too slow, offload it
[12:01:31] <coudenysj> *so you could
[12:02:03] <lmatteis> coudenysj: by offload it you mean that i could run the query, say with a cron job, and then update a separate index with the results and serve that on the site?
[12:02:30] <lmatteis> by "separate index" i meant "separate collection"
[12:02:34] <coudenysj> something like that, but the aggregation query will be the same on both cases
[12:02:43] <coudenysj> so you could start by trying it live
[12:03:04] <lmatteis> coudenysj: plus aren't indexes on the collection already building such view themselves?
[12:04:09] <coudenysj> that depends on the queries and on your indexes :)
[12:04:40] <lmatteis> consider that each row contains around 10 documents that i need to calculate
[12:05:31] <lmatteis> just a find().count() takes some 3-4 seconds
[12:05:31] <lmatteis> 10 thousand, not 10 :)
[12:06:21] <coudenysj> .explain() is your new best friend :)
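The aggregation lmatteis describes might look something like the following pipeline (a sketch only; the collection and field names `requests`, `domain`, and `up` are hypothetical, not from the channel):

```python
# Hypothetical pipeline: group request-log documents by domain and
# compute uptime as the fraction of requests whose boolean "up" field
# was true. Field names here are illustrative assumptions.
uptime_pipeline = [
    {"$group": {
        "_id": "$domain",
        "total": {"$sum": 1},
        # $cond maps the boolean to 1/0 so $sum counts the successes
        "successes": {"$sum": {"$cond": ["$up", 1, 0]}},
    }},
    {"$project": {
        "total": 1,
        "uptime": {"$divide": ["$successes", "$total"]},
    }},
]
# With pymongo this would run as: db.requests.aggregate(uptime_pipeline)
```

As coudenysj says, the same pipeline works whether it runs live per page view or from a cron job that writes results into a separate collection; only the trigger changes.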
[12:34:03] <Sky[x]> hi
[12:34:33] <Sky[x]> Is there any way to sort results case insensitive?
[12:38:11] <StephenLynx> https://www.google.com.br/search?client=ubuntu&channel=fs&q=mongodb+sort+case+insensitive&ie=utf-8&oe=utf-8&gfe_rd=cr&ei=ld-wVav7N4aC8QfYnrgY
[12:38:29] <StephenLynx> tl,dr; no
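StephenLynx's "no" is correct for the versions discussed here. Two common workarounds, sketched in Python with made-up sample data: sort client-side after fetching, or store a lowercased shadow field and sort (and index) on that instead.

```python
# Sample documents standing in for query results (hypothetical data).
docs = [{"name": "banana"}, {"name": "Apple"}, {"name": "cherry"}]

# Workaround 1: case-insensitive sort on the client after fetching.
by_name = sorted(docs, key=lambda d: d["name"].lower())

# Workaround 2: write a lowercased shadow field alongside the original
# at insert time, then query with .sort("name_lower", 1) on an index.
for d in docs:
    d["name_lower"] = d["name"].lower()
```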
[12:57:35] <Siyfion> Can someone give me a tip for my update query... :/
[12:57:43] <Siyfion> I've got an array of embedded documents,
[12:57:54] <Siyfion> I have a query "{ 'subRecipes.recipe._id': doc._id }" that finds the documents that need updating
[12:58:02] <Siyfion> next I want to pull the matching sub-document out of the array entirely
[12:58:11] <Siyfion> I thought this would work: Recipes.update({ },{ $pull: { 'subRecipes': { 'recipe._id': doc._id } } },{ multi: true });
[12:58:18] <Siyfion> But it doesn't seem too... I thought I might be able to use the "$" operator to pull the index found, but I'm not sure how.
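For reference, the shape of the $pull Siyfion is after, written in pymongo syntax (with a hypothetical `doc_id` standing in for `doc._id`): the $pull condition is matched against each element of the `subRecipes` array, removing elements whose embedded `recipe._id` matches.

```python
doc_id = "abc123"  # hypothetical value standing in for doc._id

# Reusing the match in the filter touches only documents that contain
# a matching element; the $pull removes those elements from the array.
filter_doc = {"subRecipes.recipe._id": doc_id}
update_doc = {"$pull": {"subRecipes": {"recipe._id": doc_id}}}
# recipes.update_many(filter_doc, update_doc)   # multi=True in the 2.x API

# Roughly what the server does to a matching document:
recipe_doc = {"subRecipes": [{"recipe": {"_id": "abc123"}},
                             {"recipe": {"_id": "def456"}}]}
recipe_doc["subRecipes"] = [e for e in recipe_doc["subRecipes"]
                            if e["recipe"]["_id"] != doc_id]
```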
[13:00:18] <jacksnipe> I'm having a problem with the mongodb php driver: Queries constructed using the MongoDate class wind up coming out as "$gt" : {"sec" : ..., "usec" : ...} instead of anything using any actual date types
[13:01:09] <jacksnipe> how should I be querying for date ranges?
[13:01:30] <deathanchor> jacksnipe: checking my code I have some php stuff
[13:01:39] <jacksnipe> thanks deathanchor, appreciate it
[13:02:53] <deathanchor> jacksnipe: throwing up a gist
[13:03:48] <jacksnipe> deathanchor: thanks so much
[13:04:06] <deathanchor> jacksnipe: this is what I have in my code: https://gist.github.com/deathanchor/2b2887520ff4d9e25dfa
[13:04:26] <deathanchor> I didn't write it so I can't help much
[13:04:56] <jacksnipe> well thanks anyway!
[13:05:07] <jacksnipe> will try this different construction, doesn't look like it's different than what I'm doing tho :(
[13:06:35] <deathanchor> want to comment on that page what you are doing?
[13:07:36] <coudenysj> deathanchor: what seems to be the problem?
[13:07:45] <deathanchor> jacksnipe has the problem
[13:08:06] <jacksnipe> god I have os many
[13:08:08] <jacksnipe> so*
[13:08:16] <jacksnipe> but this problem is with querying date ranges with MongoDate
[13:08:32] <jacksnipe> and deathanchor I'll comment if I figure anything productive out :)
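The usual shape of a driver-side date-range query, sketched here in Python rather than PHP (the PHP analogue is passing MongoDate objects in the same $gte/$lt positions): drivers serialize native datetime values to BSON dates, so the `{"sec": ..., "usec": ...}` form jacksnipe sees suggests the objects are being flattened before reaching the driver. The field name `created` is a hypothetical example.

```python
from datetime import datetime, timedelta

# A fixed "now" for reproducibility; in real code use datetime.utcnow().
now = datetime(2015, 7, 23, 13, 0, 0)

# Native datetimes go straight into the comparison operators; the
# driver converts them to BSON dates on the wire.
date_range_query = {"created": {"$gte": now - timedelta(days=7),
                                "$lt": now}}
# collection.find(date_range_query)
```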
[13:09:03] <[diecast]> are there any mongo tools for seeding content?
[13:09:46] <jacksnipe> arg no luck it does the same thing
[13:10:00] <jacksnipe> [diecast]: what's that?
[13:10:37] <[diecast]> jacksnipe: say I have some collections with documents that I want to insert templated information with
[13:11:02] <[diecast]> I want to drop collections and insert the new documents from templates
[13:11:26] <jacksnipe> sooo is a template just a schema in this case?
[13:17:16] <[diecast]> no
[13:17:39] <[diecast]> it is a document with information like endpoint connections, oid, application associations, etc
[13:18:02] <[diecast]> and that information needs to be filled in with variables
[13:22:03] <jacksnipe> that's certainly provided by a lot of mongodb ORMs, tho I don't think it's provided at the db level
[13:22:14] <jacksnipe> well ODMs
[13:24:28] <[diecast]> google is showing me something called doctrine odm which is a php tool?
[13:28:32] <jacksnipe> honestly it sounds like you could implement that on your own in any OO language pretty trivially
[13:35:59] <[diecast]> that's what i was thinking originally using python
[13:36:27] <[diecast]> but wanted to find out if there were any mongo tools already made
[15:36:00] <nalum> hello all, I'm trying to get the name of a collection from a collection object in python (pymongo 2.8) but it's just printing <common.classes.Datastore.Executable object at 0x7f0e8eb20190> when I try to print it as follows: print self.collection.name
[15:58:44] <_ritchie_> what’s the best mongo library for use with Django?
[16:00:00] <StephenLynx> anything wrong with the driver?
[16:01:30] <_ritchie_> the driver?
[16:01:51] <StephenLynx> yes
[16:01:52] <StephenLynx> the driver.
[16:02:02] <_ritchie_> what’s the driver?
[16:02:10] <StephenLynx> what is the language?
[16:02:21] <_ritchie_> Python
[16:02:39] <StephenLynx> http://docs.mongodb.org/ecosystem/drivers/python/
[16:02:50] <_ritchie_> fyi I don’t typically use python or mongodb, this is a client’s preference
[16:02:55] <_ritchie_> so I’m totally clueless here
[16:03:04] <_ritchie_> I don’t know the ecosystem at all
[16:03:36] <StephenLynx> so I think you might be better with a smaller burden.
[16:03:48] <StephenLynx> more dependencies, more problems.
[16:04:13] <_ritchie_> so the client was using mongoengine and it doesn’t seem like a great library
[16:04:30] <_ritchie_> and the project is just getting started so it’s not too late to port to another lib
[16:04:50] <StephenLynx> GothAlice uses it and praises it, if I remember it right.
[16:04:59] <StephenLynx> I wouldn't use anything on top of the driver, though.
[16:05:13] <_ritchie_> one thing I know I’d like to do is take a dict and save it as a Mongo Document
[16:05:26] <GothAlice> That defeats the entire purpose of having MongoEngine.
[16:05:44] <GothAlice> Thus, your expectations going in are what's causing you to interpret MongoEngine as being sub-par.
[16:06:08] <saml> I got {_id:1, v: [1,2,3]}, {_id:2, v: [2, 3, 4, 4]}, ... how can I aggregate them so that {_id:1, v: sum of $v} ?
[16:06:10] <GothAlice> db.collection.insert(dict(some="dictionary", value=27))
[16:06:14] <_ritchie_> ok that’s fair, MongoEngine seems like its purpose is to provide some validation of the Schema?
[16:06:24] <GothAlice> It's an Object Document Mapper.
[16:06:47] <GothAlice> Literally mapping fields in documents (the "dictionaries", which MongoDB calls "documents") to attributes of objects in Python-land.
[16:07:23] <GothAlice> Identical to the difference between raw SQL and a layer like SQLAlchemy (which is an "active record" Object Relational Mapper.)
[16:08:25] <_ritchie_> yea I think MongoEngine is probably too much for this project
[16:08:42] <_ritchie_> for instance there’s no business logic for any of this data
[16:08:42] <GothAlice> _ritchie_: https://github.com/marrow/cache/blob/develop/marrow/cache/model.py#L86-L197 is an example simple model of mine. Note that this performs field renaming (i.e. the attribute "value" is stored as "v" in the underlying document).
[16:08:48] <_ritchie_> it’s just saving a set of configs
[16:08:59] <saml> actually, i got {_id:1,v:[{name:'Foo',score:2}, {name:'Foo', score:3}, {name:'Bar', ..}...]} and want {_id:1, v:[{name:'Foo', score: 5}, ...]}
[16:08:59] <GothAlice> If they're configs, use INI, YAML, or SQLite.
[16:09:13] <saml> need to group by v.name and sum of v.score
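One way to do the grouping saml describes, assuming the document shape given above (`{_id, v: [{name, score}, ...]}`): unwind the array, group on the (document, name) pair summing the scores, then regroup to rebuild the array per document.

```python
# Group embedded v-array entries by name and sum their scores.
pipeline = [
    {"$unwind": "$v"},
    {"$group": {
        "_id": {"doc": "$_id", "name": "$v.name"},
        "score": {"$sum": "$v.score"},
    }},
    # Rebuild one array of {name, score} sums per original document.
    {"$group": {
        "_id": "$_id.doc",
        "v": {"$push": {"name": "$_id.name", "score": "$score"}},
    }},
]
# db.collection.aggregate(pipeline)
```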
[16:09:17] <_ritchie_> the client wanted to use MongoEngine because it validates the schema
[16:09:39] <_ritchie_> GothAlice: there’s like 3 other applications in production that read these configs as Mongodocuments
[16:10:16] <_ritchie_> and it’s a large-ish company so I don’t have the opportunity to make a change like that easily
[16:11:27] <_ritchie_> I’m just making changes to one web application which saves the configs to mongo
[16:11:51] <GothAlice> So my statement in #mongoengine applies: just use PyMongo directly.
[16:12:24] <_ritchie_> thanks GothAlice, I think you’re right
[16:12:56] <StephenLynx> people really should stop adding dependencies like adornments.
[16:13:00] <GothAlice> There are innumerable additional benefits to using an abstraction layer, whatever that layer might be. But I doubt any will convince you. (Signals, integrity, containerization, modularity, testability, …)
[16:13:14] <StephenLynx> think about the dependency in the scenario before thinking they need the dependency.
[16:15:03] <GothAlice> Because I'm using an abstraction layer in marrow.cache, I can literally, with a minimum of work, drop MongoEngine for raw PyMongo, or drop MongoDB itself for any other database layer, without impacting users of my library beyond needing to update some configuration directives. (And potentially deploy a different database, but you get the idea.)
[16:15:25] <cheeser> like java's JPA
[16:15:27] <cheeser> +1
[16:15:45] <GothAlice> MongoDB doesn't conform to Python's standard DB abstraction, alas.
[16:15:50] <GothAlice> Silly narrow focus on relational…
[16:19:12] <_ritchie_> GothAlice: is there a way to try to cast a Dict as a MongoEngine Document? something like for all keys of the dict if the key is a field on the document keep its value otherwise discard
[16:19:31] <_ritchie_> because it would be nice to be able to do that and then call is_valid()
[16:20:30] <GothAlice> _ritchie_: Basic Python syntax says MyDocument(**somedict) will work, if the dictionary keys match field names on the document class.
[16:21:03] <GothAlice> However, at what point do you have a raw dictionary? If using MongoEngine, that shouldn't happen.
[16:21:16] <GothAlice> (You deal with instances of document classes instead.)
[16:21:36] <_ritchie_> GothAlice: it’s data from an HTML form
[16:22:02] <_ritchie_> technically it’s a django QueryDict which is a subclass of Dict
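The keep-known-keys-and-discard-the-rest step _ritchie_ asks about is a one-line dict comprehension before GothAlice's `MyDocument(**somedict)` call. A minimal sketch with hypothetical form data; the set of allowed names would come from the document class (MongoEngine documents expose their declared fields via `_fields`):

```python
# Hypothetical form submission; csrf_token is not a document field.
form_data = {"name": "widget", "size": "L", "csrf_token": "xyz"}

# In MongoEngine this could be set(MyDocument._fields); hardcoded here.
allowed = {"name", "size"}

cleaned = {k: v for k, v in form_data.items() if k in allowed}
# MyDocument(**cleaned) then builds the document, ready for validation.
```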
[16:22:46] <_ritchie_> also thanks for your help with the Python syntax, typically I write Javascript or Ruby
[16:25:01] <_ritchie_> I’m just taking user input from an HTML form and saving it to mongo
[16:25:26] <StephenLynx> I assume your software handles this data one way or the other before doing that.
[16:25:57] <StephenLynx> so you won't end up with 'banana' as age or anything.
[16:27:28] <_ritchie_> StephenLynx: that’s what we’re using Mongoengine for
[16:27:49] <StephenLynx> IMO, that is complete overkill.
[16:28:10] <StephenLynx> you are using a monster truck to go for groceries around the corner.
[16:28:21] <StephenLynx> if that is all you are doing with mongoengine.
[16:30:35] <_ritchie_> StephenLynx: I’m starting to think you’re correct
[16:36:31] <_ritchie_> StephenLynx: so would a more typical use case for Mongoengine be when you have a Document that once instantiated as a Python object also has business logic associated with it?
[16:36:47] <StephenLynx> I don't know, I avoid using anything on top of the driver.
[16:36:52] <StephenLynx> so I am a little biased on that.
[16:37:03] <StephenLynx> and I don't even use python :v
[16:37:09] <_ritchie_> ha
[16:37:31] <StephenLynx> but from what I know these things do, using just schema validation is way, way, waaaay too small to justify their usage.
[16:55:37] <GothAlice> _ritchie_: StephenLynx is particularly down on Python. :P
[16:55:48] <StephenLynx> :v
[16:56:05] <GothAlice> _ritchie_: However, if light-weight data validation is all you really need, you can glue https://github.com/marrow/schema#4-validation in there.
[16:56:24] <GothAlice> (It's a very tiny library for doing data schemas, transformation, and validation, with more tests than lines of code. ;)
[16:56:51] <_ritchie_> ah thanks that is helpful
[16:58:20] <StephenLynx> speaking about LOC
[16:58:28] <StephenLynx> lynxchan broke 16.
[16:58:35] <StephenLynx> lynxchan broke 16.300 LOC today :D :D
[16:58:52] <StephenLynx> that after a small refactor that cut some LOC.
[17:00:09] <agend> hi - what happens when i insert document - and it's _id is already taken - does it give error?
[17:00:39] <StephenLynx> yes.
[17:00:47] <StephenLynx> with code 11000 for duplicate index.
[17:00:55] <StephenLynx> you don't have to provide an _id though, it is auto-generated.
[17:01:07] <agend> i know - thanks
[17:04:13] <agend> is it possible to make insert as update - some operators magic maybe - or the other way - update as insert?
[17:04:23] <StephenLynx> upsert
[17:04:23] <StephenLynx> yes.
[17:04:35] <StephenLynx> look for the upsert option.
[17:05:52] <agend> it should work
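The upsert option StephenLynx points to is a flag on the update call: if the filter matches, the update is applied; if not, a new document is inserted. In pymongo that is `collection.update_one(filt, change, upsert=True)` (`collection.update(filt, change, upsert=True)` in the 2.x-era API). A rough in-memory sketch of the semantics:

```python
def upsert(store, filt, fields):
    """Toy model of upsert against a list of dicts."""
    for doc in store:
        if all(doc.get(k) == v for k, v in filt.items()):
            doc.update(fields)            # matched: behaves like an update
            return
    store.append({**filt, **fields})      # no match: behaves like an insert

store = []
upsert(store, {"_id": 1}, {"count": 1})   # first call inserts
upsert(store, {"_id": 1}, {"count": 2})   # second call updates in place
```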
[17:23:59] <blizzow> Can someone here explain what the logging situation is with mongo 3.x? Manually adding a kill -SIGUSR1 $mongoPID and doing gross regex to search for and manage timestamped mongo log files seems hackish. What is the proper way to rotate logs with mongo 3.x?
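blizzow's question goes unanswered in the channel; for reference, MongoDB 3.0 added a `systemLog.logRotate` configuration setting. With the value `reopen`, SIGUSR1 (or the `logRotate` admin command) closes and reopens the log file under the same name instead of renaming it with a timestamp, which lets a standard logrotate setup manage the files. A sketch of the relevant config (the path is an example):

```yaml
# mongod.conf (MongoDB 3.0+)
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
  logAppend: true
  logRotate: reopen   # reopen same filename on rotate; default is rename
```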
[17:31:37] <aps> I'm getting the following error. What does this mean and how can I fix this?
[17:31:37] <aps> WriteConcernException: { "serverUsed" : "ec2-x-x-x-x.compute-1.amazonaws.com:xxx" , "ok" : 1 , "n" : 1 , "updatedExisting" : true , "err" : "waiting for replication timed out at shard-a" , "code" : 64}
[17:48:22] <ciwolsey> hmm
[17:48:40] <ciwolsey> if im searching date ranges does mongo automatically deal with time zone differences?
[17:49:14] <StephenLynx> I think it stores them all using a standard zone.
[17:49:52] <ciwolsey> so no matter what time zone i send it converts them to UTC or something?
[17:49:58] <ciwolsey> but im wondering if it does the conversion
[17:50:30] <StephenLynx> Not sure.
[17:51:35] <ciwolsey> "MongoDB stores all dates and times in UTC. This allows the timezone support and translation to be done application side instead of server side."
[17:54:04] <ciwolsey> see if im storing everything utc
[17:54:10] <ciwolsey> and thats all mongo uses
[17:54:28] <ciwolsey> and im doing a date range search
[17:54:47] <ciwolsey> does that mean i have to convert all my dates from the local time zone to utc before performing the search?
[18:02:27] <kali> ciwolsey: the driver should do that transparently
[18:02:52] <kali> ciwolsey: details vary on the language you use, obviously
[18:29:36] <ciwolsey> kali,
[18:29:44] <ciwolsey> so if im saving everything in utc
[18:29:51] <ciwolsey> and i supply a date using another time zone..
[18:30:03] <ciwolsey> will mongo convert it to utc before querying?
[18:30:16] <ciwolsey> im using javascript
[18:31:22] <StephenLynx> since mongo converts everything before storing, I guess it would also convert to UTC when using a supplied date.
[18:31:34] <ciwolsey> would make sense
[18:32:06] <ciwolsey> but i wasnt even entirely sure it converts when saving
[18:32:12] <ciwolsey> since my local time matches utc
[18:38:30] <kali> i'm not sure if that will clarify it in any way, but let me try: mongodb date is necessarily UTC. there is no provision to store a separate timezone in the database.
[18:39:04] <ciwolsey> ok
[18:39:13] <ciwolsey> But you can supply them along with a timezone
[18:39:35] <ciwolsey> "Thu Jul 23 2015 19:37:46 GMT+0100 (BST)"
[18:39:38] <ciwolsey> this is the format im using
[18:39:41] <StephenLynx> which will cause the date to be converted to UTC
[18:39:43] <ciwolsey> which mongo accepts as a date
[18:39:50] <ciwolsey> ok
[18:39:59] <ciwolsey> so its not simply ignoring the timezone?
[18:40:03] <StephenLynx> no
[18:40:10] <ciwolsey> thats what i was trying to understand
[18:40:18] <ciwolsey> and for queries i assume this is also true
[18:40:25] <kali> the database ignores the timezone, but the driver and the language will make sure they provide utc to the database
[18:40:32] <ciwolsey> so i can actually store AND query supplying a different timezone
[18:40:33] <StephenLynx> ah
[18:40:44] <ciwolsey> oh
[18:41:01] <ciwolsey> im actually using meteor
[18:41:04] <ciwolsey> which runs on top of node
[18:41:17] <ciwolsey> so i assume the driver is of good quality
[18:41:31] <kali> i'm pretty sure it will do the right thing
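What kali describes can be seen with plain datetimes (Python here rather than ciwolsey's JavaScript, but the principle is identical): a local time carrying an offset denotes the same instant as its UTC form, so range queries built from either representation match the same documents.

```python
from datetime import datetime, timezone, timedelta

# The +0100 (BST) offset from ciwolsey's example timestamp.
bst = timezone(timedelta(hours=1))
local = datetime(2015, 7, 23, 19, 37, 46, tzinfo=bst)

# Normalizing to UTC changes the clock reading, not the instant.
as_utc = local.astimezone(timezone.utc)
```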
[18:56:27] <iSpoof> hi
[18:56:39] <iSpoof> how can I add a new field to all existing documents in the collection, but not appended at the end of the documents but in some specific position? I've been checking $position and $push but this is for array values. Is that doable after all? I'm thinking on the functionality that RDBS implement through ALTER TABLE .. AFTER .. but I don't know if MongoDB db.collection.update() provides some way to do that
[19:03:47] <deathanchor> iSpoof: mongodb = schemaless.
[19:03:58] <deathanchor> what do you want to add?
[19:04:31] <deathanchor> also you don't get much choice in the ordering of the data within a document in mongo.
[19:06:51] <iSpoof> deathanchor: I see... so I'd need Mongoose or something to emulate schemas and put the field in arbitrary positions?
[19:07:48] <iSpoof> I understand that for a machine, the position of the key in a json object is utterly irrelevant, i just wanted to have the new field close to the top so its easier to read in the MongoLab ui : )
[19:08:07] <deathanchor> don't know what mongolab is
[19:08:39] <iSpoof> ah its a SaaS provider of Mongo databases in the cloud
[19:09:16] <iSpoof> anyways, i got the answer i needed, and makes plenty of sense. thanks deathanchor
[19:09:34] <deathanchor> yeah all schema/ordering is done on the application endpoint, mongodb doesn't provide any methods for what you are asking.
[19:09:59] <iSpoof> cool. its awesome nonetheless : p
[19:10:48] <deathanchor> yeah, I just get what I need when I query: .find({ search : "forthis"}, { show : 1, me: 1, this: 1} )
[20:33:39] <pokEarl> Uhm so some questions around resource use and optimal performance. Like lets say you have a document with 3 values of equal length/type in it. Is getting all of them 3 times as resource intensive as just getting one? So you should always just get out the values you need? Does nested or non nested have any effect?
[20:34:48] <StephenLynx> I have a hunch most of the load goes into looking for the document.
[20:34:51] <StephenLynx> not sure though.
[20:35:30] <StephenLynx> after that, how much load it takes to handle the found document depends on other operations (aggregation stages) and how much RAM it needs to hold the found documents.
[20:35:34] <StephenLynx> IMO
[20:35:38] <mike_edmr> right
[20:36:31] <mike_edmr> requesting specific fields has an infinitesimal cpu penalty compared to just serving up the doc as-is
[20:36:42] <mike_edmr> doesnt affect lookup time
[20:36:56] <mike_edmr> but does reduce latency of data delivery and use less bandwidth
[20:37:10] <mike_edmr> by sending less
[20:38:27] <pokEarl> ook
[20:38:56] <StephenLynx> in general, it is more efficient to project only the information you want to use.
[20:39:28] <StephenLynx> but for very obvious reasons.
[20:39:45] <StephenLynx> nothing about mongo's intricacies.
[20:40:58] <mike_edmr> yeah just network mechanics
[20:42:40] <pokEarl> Yeah =p I am making an API as a layer over mongoDBs (with Java/jackson returning Json) and I have classes representing the different documents and was considering if I should be making tons of different methods to get out all the different combinations of values or something, sacrificing a lot of potential flexibility
[20:44:53] <mike_edmr> hmm
[20:45:39] <mike_edmr> I wouldn't think you'd want your object methods to be making individual calls for each value when you might be accessing multiple methods that could have been supplied by a single mongo call
[20:46:12] <mike_edmr> if that makes sense
[20:47:18] <pokEarl> I am not/what do you mean? obviously it makes sense but not sure what I gave that impression by :P Will still use projections where it makes sense obviously but not going to create a million different methods with different projection variations for every getById type method or whatever =p
[20:50:39] <mike_edmr> if you had a customer document and a Customer class that wraps it with getName and getAddress, you wouldn't want a getName and a getAddress to make seperate calls to mongo
[20:51:00] <mike_edmr> so it'd be best to provide methods that do all the things you need to do in a single call for a particular use
[20:51:42] <mike_edmr> like if you have a function that needs access to name and address, make a getNameAndAddress
[20:56:32] <pokEarl> nono I don't, i just make 1 call for the document and it gets mapped to the class by Jackson/Mongojack
[20:57:12] <mike_edmr> right but that may not be optimal if all you need is part of the document :)
[21:02:41] <pokEarl> No, but I was not sure how suboptimal it is, which is why I asked ;P And the conclusion I reached is that obviously there is some performance to be gained. But it is not worth creating a semi-infinite amount of different versions of every single call you might want just to apply different projections depending on the API call. Like its fine just having 1 getDocument/Id method which returns the document by the id, instead of tons of different methods
[21:02:41] <pokEarl> for every getDocument/Id combination of Values. Like with 15 different values in a document I am sure that represents a pretty large amount of different ways the fields can be combined =p
[21:53:20] <pokEarl> is there a way to get a collection to show the "maxed out" documents if that question even makes sense? I guess maybe that is not possible because of schemalessness or something?
[21:55:05] <pokEarl> Like if theres document 1 through 1 million with value1 through 10, but then the next document has a value 11, is there a way to find all such values without knowing beforehand what they are, and not going through all the documents? :P
[21:57:46] <mike_edmr> you would either need to go through the documents with a tablescan or create an index to optimize that query
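On servers newer than the 3.x versions discussed here (MongoDB 3.4.4+), the distinct-key listing pokEarl asks about can be pushed into an aggregation with `$objectToArray`, though, as mike_edmr notes, it still reads every document unless an index covers the query:

```python
# List every distinct top-level key across a collection.
distinct_keys_pipeline = [
    # Turn each document into an array of {k, v} pairs.
    {"$project": {"kv": {"$objectToArray": "$$ROOT"}}},
    {"$unwind": "$kv"},
    # Collect the set of key names seen anywhere in the collection.
    {"$group": {"_id": None, "keys": {"$addToSet": "$kv.k"}}},
]
# db.collection.aggregate(distinct_keys_pipeline)
```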
[21:58:38] <pokEarl> myeah
[22:46:38] <vzp_> does mongorestore block the database until finished when I call it with --drop?
[22:55:32] <joannac> vzp_: define "block". mongorestore is continuously sending writes, so your reads will probably end up waiting