#mongodb logs for Thursday the 6th of March, 2014

[02:15:55] <raminnoodle> I have an object that only has 2 parameters, price & time. In mongo, is it better to insert a new object into a collection every time a price change occurs, or to have a single object in the collection with a nested array that contains the collection of objects? The reason I would use a nested array within an object is so I could use the unshift & pop methods to keep a rolling 5000 count without
[02:15:55] <raminnoodle> having to do table management. Am I overthinking it, or is there a best solution for so many objects?
[02:17:28] <rcombs> mongodb has a mechanism to keep a collection at a particular size already
[02:17:39] <raminnoodle> oh nice
[02:18:04] <rcombs> and if I remembered what it was called, I'd link you to it
[02:18:17] <raminnoodle> haha its cool i got google
[02:18:45] <rcombs> doing a nested array would definitely not be your best choice here
[02:19:02] <raminnoodle> should i just use redis...
[02:19:08] <raminnoodle> or is mongo up to speed
[03:49:27] <raminnoodle> rcombs: capped collections!
[03:49:40] <rcombs> huzzah!
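The feature rcombs couldn't name is indeed capped collections. A minimal sketch of the rolling-5000 idea in the mongo shell (the collection name, size cap, and field names are illustrative, not from the conversation):

```javascript
// A capped collection needs a byte-size cap; "max" additionally bounds the
// document count. Once full, new inserts overwrite the oldest documents.
db.createCollection("prices", { capped: true, size: 1048576, max: 5000 })

// No manual unshift/pop housekeeping needed; insertion order is preserved
db.prices.insert({ price: 12.34, time: new Date() })
```

Note that capped collections trade flexibility for this behavior: documents can't grow on update and can't be removed individually.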
[06:30:12] <muraliv> hi, is there a way to replicate particular database alone in mongodb. I don't want all the databases to be replicated by default.
[06:41:00] <LoneStar99> for anyone here that needs a MongoDB hosted solution for whatever -> https://www.pozr.in we just added a php/ mongodb connection sample
[06:41:23] <LoneStar99> https://groups.yahoo.com/group/pozr/files/pozr_mongodb_030514.zip
[06:42:03] <LoneStar99> let me know if yall have any questions
[07:56:50] <chandru_in> When a document is removed, does mongodb just mark it as deleted?
[08:04:07] <mboman> Can I join two collections in a query? I have a GridFS with files and a collection with metadata about the files, and I want to query all files from db.fs.files that don't have a particular key set in the other collection
[08:04:48] <mboman> I have the md5 hash in both collections which I can use as key
[08:05:12] <mboman> Zelest, cool domain name :-)
[08:06:30] <Zelest> hehe, thanks
[08:47:58] <mboman> Let me rephrase my question / problem. Are there any drawbacks to using the db.fs.files collection for storing metadata about the files stored in db.fs.chunks?
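For the record, GridFS already reserves a place for this: each fs.files document has an optional `metadata` subdocument, so a second collection (and the md5 join) can often be avoided entirely. A sketch, where the `scanned` field and the `filemeta` collection are purely hypothetical:

```javascript
// If custom metadata lives on the file document itself, fs.files can be
// queried directly for files lacking a given key:
db.fs.files.find({ "metadata.scanned": { $exists: false } })

// If the metadata must stay in a separate collection, the usual workaround
// is a client-side two-step "join" on the shared md5 key:
var seen = db.filemeta.distinct("md5", { scanned: true })
db.fs.files.find({ md5: { $nin: seen } })
```

The $nin approach scans the whole distinct list per query, so it only scales to modest collection sizes.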
[09:35:39] <ixti> has anybody ever been in a situation where the ruby driver hangs?
[09:36:38] <ixti> sometimes (pretty often) i see my sidekiq worker stuck for 1 day; checking the backtrace, it looks like the mongo driver is waiting for a response
[09:45:27] <ixti> this is a relevant backtrace: https://gist.github.com/ixti/9386260
[11:00:12] <Guest45591> hello, is anybody familar with mongobe
[11:00:16] <Guest45591> mongodb
[11:01:26] <kali> ....
[11:01:32] <Nodex> lol no, we all just sit in the chan for no reason
[11:02:41] <Guest45591> well i don't use irc that much and i don't like to assume anything
[11:02:59] <Guest45591> i take it that you are versed in mongodb
[11:03:15] <Guest45591> may i ask you a question? (after this one)
[11:08:49] <Nodex> best to just ask :)
[11:10:39] <Guest45591> ok, thank you. I am using dartlang and having problems with the mongo connection. The problem could lie in that I am not using Futures, but it also could be that I am not using mongo Cursors correctly
[11:11:13] <Guest45591> if I wanted to get the information from the cursor, do i have to close the cursor as well?
[11:11:44] <Guest45591> there is not a lot of documentation on the dartlang mongodb connector
[11:13:11] <Nodex> I've never used Dart, might be an idea to ask in the google group too?
[11:13:34] <Guest45591> that is true. dartlang is a javascript replacement it is very new
[11:14:05] <Guest45591> i think i will try the dartlang folks, they may have a better understanding
[11:35:17] <ixti> Guest45591: what connection problem do you have?
[12:11:04] <jinmatt> I have installed munin-node on an EC2 mongodb server, but I'm not getting the cpu graph at MMS
[12:11:43] <jinmatt> when I test with telnet [ec2 public dns name] 4949 the connection gets closed
[12:12:12] <jinmatt> Escape character is '^]'.
[12:12:13] <jinmatt> Connection closed by foreign host.
[12:12:21] <jinmatt> any idea?
[12:38:17] <noqqe> hi
[12:38:25] <jinmatt> hi
[12:38:52] <noqqe> quick question about config servers: does it make sense to run the 3 config servers on some of the shard nodes?
[12:39:40] <noqqe> i have some problems getting 2 mongo processes started with different config files on one machine :/
[12:50:39] <masan> how do I copy a local database to a remote server database which has credentials? I tried db.copyDatabase('source', 'target', 'localhost:27017')
[12:50:51] <masan> But it throws unauthorized
[12:51:59] <jinmatt> no need to specify port
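For reference, db.copyDatabase() also accepts credentials for the source host, which is usually what fixes the "unauthorized" error. A sketch with placeholder host and credentials:

```javascript
// db.copyDatabase(fromdb, todb, fromhost, username, password)
// Run from a shell connected to the destination server; the username and
// password authenticate against the source host (all values here are
// placeholders, not from masan's setup)
db.copyDatabase("source", "target", "source.example.com:27017", "myuser", "mypass")
```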
[12:52:09] <kali> noqqe: it does make sense. the way to achieve that depends on your distribution and your startup scripts
[12:53:31] <noqqe> kali: thx. but there are 2 processes with different configs and mongodb is okay with that, right?
[12:53:50] <kali> yeah, you need to have different dbpath and port
[12:54:37] <noqqe> okay - then i have to debug these SLES startup utils within sysVinit scripts. thank you!
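As kali says, two mongod processes coexist fine on one box provided each has its own port and dbpath. A sketch of the two invocations with hypothetical paths (the shard member and config server roles are illustrative):

```shell
# Shard member on 27018, config server on 27019; each process gets its
# own data directory and log file so they never collide
mongod --shardsvr  --port 27018 --dbpath /data/shard0   --fork --logpath /var/log/mongodb/shard0.log
mongod --configsvr --port 27019 --dbpath /data/configdb --fork --logpath /var/log/mongodb/config.log
```

The same separation applies when each process reads its own --config file; the init-script debugging is then just a matter of ensuring each script passes a different file.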
[14:19:44] <ahawkins> hello again everyone :)
[14:20:59] <Nodex> everyone again hellow :)
[14:21:03] <Nodex> -w
[14:22:31] <ahawkins> i have a document like this: email, site, admin. I need to do an or query to look up stuff: (email == example && market == example && !admin) || (email == example && admin). Also there must be unique emails on either side of the or. Should I keep these things in the same collection or break it up into two?
[14:23:17] <Nodex> same collection or document?
[14:23:51] <ahawkins> ? right now there is only one collection
[14:24:37] <Nodex> what do you think a collection is if you were to compare it to a relational database?
[14:24:51] <ahawkins> table
[14:25:13] <Nodex> ok so again I ask, same collection or same document
[14:25:41] <Nodex> i/e your document would look like this ... {email:"foo", admin:true, site:"bar"}
[14:26:06] <ahawkins> yes it looks like that atm
[14:26:43] <Nodex> ok that's a document not a collection
[14:28:35] <ahawkins> ya, is it possible to index that collection correctly?
[14:29:00] <Nodex> it would need two indexes
[14:29:37] <ahawkins> ya, I have email/admin (unique) and market/email/admin (unique)
[14:30:13] <Nodex> infact you can do it with one compound index
[14:30:41] <Nodex> email + admin + market, then you can query email + admin and email + admin + market on the same index
[14:32:34] <ahawkins> so essentially every field in the document in a compound index. Would that work with an $or query?
[14:33:57] <Nodex> the fields are IN the compound index. And regarding the $or, it's not something I've ever tried, so you would have to explain() it and see
[14:36:04] <Nodex> http://docs.mongodb.org/manual/core/index-compound/#prefixes <--- more info can be found there
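A sketch of the single-compound-index idea (collection name and field values are illustrative; ensureIndex was the index-creation call of this era):

```javascript
// One compound index serves queries on any left-prefix of its keys.
// Note: a unique option here would constrain the whole key combination,
// not email alone, which is not quite ahawkins' uniqueness requirement.
db.users.ensureIndex({ email: 1, admin: 1, market: 1 })

// Served by the index (prefix: email, admin):
db.users.find({ email: "a@example.com", admin: true })

// Also served (full key):
db.users.find({ email: "a@example.com", admin: false, market: "example" })

// Whether an $or across both shapes uses the index is, as Nodex says,
// something to verify with explain():
db.users.find({ $or: [
  { email: "a@example.com", admin: true },
  { email: "a@example.com", admin: false, market: "example" }
] }).explain()
```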
[14:39:59] <ahawkins> can you index on nested fields like foo.bar ?
[14:40:20] <cheeser> yep
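Dot notation works in index specs just as it does in queries; a minimal sketch on a hypothetical collection:

```javascript
// Index a nested field with dot notation
db.things.ensureIndex({ "foo.bar": 1 })

// Queries on the nested field can then use the index
db.things.find({ "foo.bar": "baz" })
```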
[14:41:17] <arunbabu> I have a mongo db with around 1000 product infos. I need to build a ui where a user should be able to filter products based on different fields, sort them, etc. Are there any OOB solutions available?
[14:44:25] <arunbabu> What would be the best way to expose mongodb via a web ui where the user is able to read/filter/search records?
[14:46:33] <mylord> Should "Apps" and "UserActivity" typically be in different DB's, diff collections, same collection? (For a site that collects user stats from lots of 3rd party app)
[14:49:00] <Nodex> mylord : really depends on your access patterns
[14:49:31] <Nodex> arunbabu : that's the $64k question, depends on your app and how you use it
[14:52:46] <ahawkins> should indexes be created on a session in safe mode?
[14:52:59] <arunbabu> Nodex: My usecase is very simple. I have little information with me and I need to share it with friends who may be interested in the same (most should be like select * where name=a and cost<100 in mysql) Any suggestion about creating a ui for the same?
[14:53:17] <Nodex> ahawkins : no
[14:53:22] <arunbabu> *ui accessible through browser
[14:53:48] <Nodex> arunbabu : by creating an application to query the database - just like any other app
[14:54:37] <arunbabu> Nodex: hmm.. so there is no easy way to do it ?
[14:56:01] <Nodex> that is easy ? what's hard about it?
[14:57:01] <arunbabu> Sorry for my ignorance, but I haven't written anything like this before. You are suggesting to do this in javascript, right?
[14:57:32] <arunbabu> as some application that runs on top of node ?
[14:59:07] <ahawkins> removing all documents will reindex things right?
[15:00:32] <cheeser> well, it'll remove all the index entries
[15:01:00] <ahawkins> ah I was forgetting to drop the indexes in my tests...still working this all out :)
[15:02:06] <Nodex> arunbabu : write it in whatever you feel comfortable with
[15:08:15] <scruz> hello. i'm trying to run an aggregate query to get results like this: https://dpaste.de/DWbk. is this currently possible?
[15:09:00] <cheeser> what are you actually getting?
[15:09:59] <scruz> cheeser: i'm getting a total for each unique combination of section.
[15:10:53] <cheeser> pastebin?
[15:13:40] <richthegeek> hey, I'm asking the years-old compression question again! I've got a system that uses 12gb of disk to store approximately 900mb of data (or 180mb compressed!) .... is this ever going to be addressed, or fixed?
[15:13:58] <scruz> something like this: https://dpaste.de/o9ad
[15:14:50] <richthegeek> I understand the "why it uses lots of disk space", but given that it's demonstrably possible to NOT use so much disk space the question I'm asking is : when is Mongo going to use a reasonable amount of disk space?
[15:18:40] <scruz> i can retrieve the results and postprocess them, but i'm just asking if there's a way to do it all inside MongoDB directly
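The dpaste links above have expired, so the exact shape scruz wants is lost; the generic pattern for "a total per unique combination of fields", done entirely inside MongoDB, is a compound _id in $group (collection and field names below are guesses):

```javascript
// Group on the combination of two section fields and count each combination
db.answers.aggregate([
  { $group: {
      _id: { section1: "$section1", section2: "$section2" },
      total: { $sum: 1 }
  } }
])
```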
[15:19:25] <Derick> richthegeek: 12gb for 900mb sounds odd
[15:19:57] <richthegeek> Derick: it's across 8 databases with ~20 collections each
[15:20:21] <Derick> oh, right
[15:20:24] <Derick> the 8 dbs will do that
[15:20:49] <richthegeek> sure, but that's really an explanation for why the bug exists
[15:21:31] <Derick> it's not a bug. it's a performance optimisation
[15:21:43] <Derick> sacrificing diskspace for speed
[15:22:03] <cheeser> the gods must be appeased
[15:23:02] <richthegeek> Imagine I have a Hoover which heats up over time, to the point of setting my house on fire. Given the existence of Dysons (which don't set my house on fire), is Hoover saying "it's because we spin the motor really fast to improve suction; friction causes fires over time" actually an answer to the problem?
[15:23:24] <richthegeek> sorry that sounded kinda dickish towards the end!
[15:24:03] <cheeser> well, when your server catches fire then that comparison will be valid
[15:24:17] <richthegeek> but it's just that the past times I've asked this question, people have just explained "why" the problem exists, rather than saying if or when it will ever be solved
[15:24:38] <joannac> I would say it's more like it spins faster so it catches more lint. If you don't want to clean out the lint then of course it'll catch on fire.
[15:24:43] <cheeser> it's on the radar, yes. "when" and "if" are still undetermined.
[15:24:49] <richthegeek> "ever"?
[15:24:55] <joannac> If you clean out the lint you get the good performance without the chance of fire
[15:25:03] <richthegeek> and this lint-cleaning process is....
[15:25:15] <joannac> repair / resync.
[15:25:21] <richthegeek> so like a de-frag?
[15:25:26] <cheeser> yeah
[15:25:28] <richthegeek> which involves downtime...
[15:25:45] <richthegeek> so that's really just a band-aid
[15:26:53] <cheeser> it's kinda like the weather. you can complain all you want but it won't change anything.
[15:27:17] <richthegeek> that doesn't really mesh with the "on the radar" comment
[15:27:23] <richthegeek> if you are now saying it's a "wont fix"?
[15:28:01] <richthegeek> I'm just concerned because the issue has been known and open on the bug tracker for a looooooong time, with a LOT of people saying "i'd really like this to be solved"
[15:28:04] <cheeser> no one's saying that at all
[15:28:12] <richthegeek> you literally just said that
[15:28:23] <cheeser> i literally did not, in fact.
[15:28:36] <richthegeek> "you can complain all you want but it won't change anything."
[15:28:56] <cheeser> it'll still be raining when you walk out the door.
[15:29:10] <cheeser> nothing in that said that compression will never happen.
[15:29:22] <richthegeek> but knowing the existence of umbrellas, is rain something to be borne?
[15:29:44] <cheeser> isn't an umbrella just a band-aid for the bad weather, though?
[15:29:52] <richthegeek> well then the analogy is wrong
[15:30:03] <cheeser> it isn't. you just don't like the answer.
[15:30:23] <richthegeek> I guess the question at heart can be "Why isnt this an issue with Postgre, TokuMX, RethinkDB, ElasticSearch, .........."
[15:30:49] <cheeser> postgresql went years with a manual vacuumdb command.
[15:30:55] <richthegeek> or perhaps "why can't the solutions used by the above be applied to Mongo" ...
[15:31:06] <cheeser> when they had less important things to focus on, they made that automatic.
[15:31:20] <cheeser> well, patches are certainly welcome.
[15:31:36] <cheeser> i mean, unless you have more important things to focus on.
[15:32:04] <richthegeek> I'm not a database developer, so that's a pretty glib thing to say (and you know it is)
[15:33:01] <richthegeek> it'd just be nice to know if it is ever going to get a solution that is, to the developers, transparent
[15:33:19] <richthegeek> and as part of that answer, some sort of timescale
[15:33:39] <richthegeek> which is something that large parts of the community have been asking for on the issue for an absurdly long time
[15:33:44] <cheeser> it's a pretty glib thing to say "just use what those guys did" without the experience or expertise to measure the suitability of those approaches.
[15:34:26] <cheeser> anyway, you have your answer. i'm moving on to more productive things.
[15:34:31] <richthegeek> I don't though
[15:34:35] <richthegeek> we're exactly where we started
[15:34:55] <richthegeek> with the answer as "we'll fix it sometime between now and the heat-death of the universe"
[15:35:00] <richthegeek> which is not an answer at all
[15:35:16] <cheeser> it is. just not one you like. which is fair. but that's how things stand.
[15:35:25] <joannac> Okay, ideally, what kind of answer do you want?
[15:35:27] <cheeser> there's like a jira ticket on it. vote on it.
[15:35:32] <joannac> We'll fix it on X date?
[15:36:29] <richthegeek> some sort of timescale would be nice, even one which is as simple as "won't even start trying to fix it until at least 2015" or "roadmapped for version 24"
[15:36:51] <Derick> we don't plan that far in advance
[15:37:01] <richthegeek> it's just that the complete lack of communication on this and a few other long-running issues is the most infuriating thing about working with Mongo
[15:37:59] <richthegeek> and in a related note, the utter hostility when someone asks you "what the fuck"
[15:38:10] <richthegeek> I know it's a community product, so you have total control over what you actually do with your time
[15:38:40] <joannac> richthegeek: hostility?
[15:38:54] <richthegeek> but, some level of maturity when saying "actually, we don't care about that issue until at least X other things are done" instead of some vague promises of "we'll do it maybe sometime in the future probably"
[15:40:24] <joannac> richthegeek: I can't speak on behalf of engineering, but tbh i've seen this maybe 3 times in 500 tickets I've looked at, and usually the proposed workaround is fine.
[15:40:31] <joannac> So I don't see it as a priority
[15:42:19] <richthegeek> which is odd, because it's something I see get mentioned in a lot of threads about mongo on HN or reddit...
[15:43:44] <Nodex> from a bystander's pov, I don't see it asked for all that often but it would be nice
[15:44:17] <cheeser> it would be, for sure. but isn't as pressing as certain other features.
[15:45:45] <richthegeek> so perhaps I'm just not seeing the channels of communication ... is there anywhere public that says roughly what the planned timeline of major features and changes is?
[15:45:58] <richthegeek> for me JIRA is pretty horrible to use in that regard...
[15:46:53] <richthegeek> and it seems it doesn't really plan very far ahead
[15:47:51] <cheeser> that's pretty much it.
[15:49:27] <richthegeek> you really dont plan more than 3 weeks ahead?
[15:49:37] <cheeser> um...
[15:49:40] <joannac> Where did you get 3 weeks from?
[15:49:56] <richthegeek> release date for 2.4.10
[15:50:04] <richthegeek> which is the most recent one with issues assigned on the roadmap
[15:50:30] <richthegeek> oh wait, stuff at the bottom there haha
[15:51:04] <richthegeek> but then it explodes into so much information - is there any way to filter that to "features and high-priority fixes"?
[15:51:27] <cheeser> there's the "popular issues" tab
[15:51:43] <cheeser> but that's more from people voting than any internal prioritization, necessarily.
[15:51:57] <cheeser> the fix version is a guide but can be fluid
[15:52:11] <jsantiagoh> Hi All. Would it be ok to run the mongos process in the same servers where the configuration mongod(s) are running for a sharded cluster?
[15:52:13] <richthegeek> I suppose something that would be great to see (in OS projects in general) would be a "State of the union" blog post every half-year or so
[15:57:59] <joannac> jsantiagoh: for production or testing?
[15:58:13] <jsantiagoh> for production
[15:58:35] <joannac> jsantiagoh: on what basis? don't have the hardware?
[15:59:02] <joannac> jsantiagoh: what's the system look like? # shards?
[15:59:27] <jsantiagoh> joannac: 3 shards, 3 replicas per shard and 3 replicas for the config servers
[15:59:42] <joannac> config servers are not a replica set
[15:59:48] <joannac> jsantiagoh: how many physical servers?
[15:59:53] <jsantiagoh> Derick: that would be the first choice but it complicates the network configuration a lot because of the setup
[16:00:05] <Derick> why?
[16:00:54] <jsantiagoh> joannac: every single one is running on a separate host (in the "cloud")
[16:01:57] <jsantiagoh> Derick: because the application servers and data are running in different zones for security reasons (can't be changed)
[16:02:18] <Derick> ic
[16:02:18] <joannac> how would they contact the config servers then?
[16:02:34] <joannac> if you put your mongos there?
[16:02:50] <jsantiagoh> sorry, let me explain a bit more...
[16:03:11] <jsantiagoh> application -> load balancer -> mongos
[16:03:25] <Derick> do not put a load balancer in front of mongos
[16:03:29] <cheeser> evar!
[16:03:35] <joannac> hey, guys.
[16:03:39] <cheeser> ahem.
[16:03:41] <jsantiagoh> why not?
[16:03:42] <joannac> you can do it, but it's difficult
[16:03:46] <Derick> but let joannac handle this ;-)
[16:03:49] <Derick> she's the smart one here
[16:03:54] <joannac> awwww
[16:03:55] <cheeser> yeah. there are subtleties involved in that setup.
[16:04:13] <joannac> because if you send a request, and then send a getmore (to get more results), you have to hit the same mongos
[16:04:24] <joannac> and most load balancers don't really do that
[16:04:53] <jsantiagoh> it's a big LB, there's no problem in doing sticky sessions with it
[16:05:01] <joannac> sorry, hit the same mongos, get the same connection
[16:06:00] <jsantiagoh> assume that there's no issues in configuring the load balancer for doing that
[16:08:12] <joannac> okay. then what?
[16:08:34] <jsantiagoh> I have the feeling that there's an issue with running the mongos on the same server as the configuration mongod, but I don't know for sure
[16:08:47] <joannac> high availability. resource contention.
[16:09:41] <jsantiagoh> there's 3 config servers, so at least for now we'll deploy 3 mongos processes
[16:11:11] <jsantiagoh> the network configuration simplification is due to the fact that the applications only have to access the load balancer and cross zones at that point (if that makes any sense)
[16:12:30] <joannac> rather than have the zone crossing after the mongos? sure, I guess
[16:12:39] <jsantiagoh> the alternative would be to deploy the mongos instances on separate servers, which seems like a waste of resources to me
[16:13:09] <jsantiagoh> yes
[16:16:03] <joannac> HA is mostly a waste of resources except when it isn't :)
[16:16:20] <jsantiagoh> joannac: it's the same feeling I have, but I can't find any strong reason for not doing it
[16:16:32] <jsantiagoh> and the network guys are really pushing for that setup
[16:18:08] <jasjeev4> Hi, I have a simple question. If I have a bunch of unrelated information to store about users should I put it in the same collection or go for different collections?
[16:21:11] <joannac> jsantiagoh: shrug. ha and resource contention
[16:21:43] <joannac> if you have the hardware to support it and you're willing to take the risk, then sure
[16:21:57] <joannac> well, not hardware. cloud resources
[16:22:02] <jsantiagoh> =)
[16:23:38] <joannac> jasjeev4: how much?
[16:23:47] <joannac> jasjeev4: stuff you want to commonly access?
[16:24:29] <jasjeev4> Well one set of data I'll need to write to every 5 minutes or so
[16:24:45] <jasjeev4> Read well whenever a user wants
[16:25:09] <joannac> then put it together
[16:25:10] <jsantiagoh> joannac: I don't see the HA issue. If we have 3 config servers with 3 mongos running, wouldn't it be "decently highly available"? we'd have to lose all 3 for the entire thing to go down
[16:25:23] <jsantiagoh> maybe it's a stupid question, sorry
[16:25:34] <joannac> jsantiagoh: nope, one config server down stops migrations.
[16:25:53] <joannac> if you're relying on 3 mongos and are down to 2, can the 2 handle the same traffic 3 used to?
[16:26:21] <jsantiagoh> joannac: for now yes
[16:26:26] <joannac> losing the first config server introduces delays if you have auth
[16:26:48] <jsantiagoh> joannac: that's interesting, why is it?
[16:27:07] <joannac> because the first config server is important
[16:27:12] <jasjeev4> What's the advantage of having it together? I forgot to mention that the sets of data aren't required at the same time.
[16:27:27] <joannac> jasjeev4: you don't have to try and join data from 2 collections
[16:27:51] <joannac> okay, i gotta go
[16:27:59] <jsantiagoh> joannac: thanks a lot for the data
[17:02:27] <rickibalboa> I have the following query; https://ghostbin.com/paste/3baq9 I only want to see the grouped rows that have a total of > 1 how can I do that? I'm messing around with finalize but not getting far
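The ghostbin paste is no longer visible, but for "keep only groups with a total > 1" the aggregation framework's answer is a $match stage after the $group (the analogue of SQL's HAVING), rather than a map-reduce finalize; the collection and grouping key below are guesses:

```javascript
db.events.aggregate([
  { $group: { _id: "$someKey", total: { $sum: 1 } } },
  { $match: { total: { $gt: 1 } } }   // like SQL's HAVING total > 1
])
```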
[18:27:17] <valonia> Is it possible to make this query in mongodb
[18:27:18] <valonia> SELECT DISTINCT queue.* FROM queue INNER JOIN accounts ON queue.accid = accounts.accid AND accounts.lastlogin > 1256374 AND accounts.region = 'EU' AND queue.hostname IS NULL GROUP BY queue.accid ORDER BY queue.id ASC
[18:31:43] <kali> valonia: it depends how your data is modeled in mongodb
[18:31:57] <valonia> kali: Its exactly the same
[18:32:21] <kali> you mean a queue collection and an account collection ?
[18:32:37] <valonia> Yes
[18:32:40] <kali> then no.
[18:32:53] <valonia> ahm
[19:15:42] <unholycrab> is it considered bad form to run your mongos instances on your replica set members?
[19:15:47] <unholycrab> instead of your "application servers"
[19:16:16] <kali> not really... whatever is the most convenient
[19:17:05] <kali> unholycrab: in our case, we have a mongos on each and every node of the grid, that way, we can always connect to localhost:27xxx to get the mongos
[19:17:39] <unholycrab> nice. so you just run it on every machine that would need to access the mongo cluster
[19:19:24] <kali> yes
[19:22:23] <joannac> kali: interesting. how many nodes?
[19:25:43] <Joeskyyy> I prefer to have it on my application nodes so the initial mongo connection latency is nothing at all, of course the mongos process may still take longer to report back what you need. But it's better than waiting for the connect, then waiting for the response.
[19:26:36] <Joeskyyy> But with internet speeds kids have these days...
[19:51:38] <bmw0679> Hello, I have a situation where I've run out of disk space on my replicaset. The primary is down and in recovering and I'm wondering if I can remove the db files directly to help free up space or if that will just make the problem worse.
[20:06:29] <joannac> erm
[20:06:37] <joannac> you've run out of space on all 3 nodes?
[20:11:21] <bmw0679> The primary node... and the whole thing shutdown.
[20:12:15] <joannac> hmm
[20:12:27] <joannac> no failover?
[20:13:23] <bmw0679> apparently not, I'm not the one that did the setup and trying to get things back up and running
[20:13:31] <joannac> firstly, failover to get the replica set up
[20:13:38] <joannac> shut down the primary and get more space
[20:14:01] <joannac> not sure how deleting db files will help unless you want to initial sync and get rid of fragmentation
[20:17:26] <bmw0679> that makes sense. basically use stepdown on the primary and shut it down and then get more space on the box right?
[20:18:25] <joannac> yes
[20:18:52] <joannac> my understanding is that you should still be able to shutdown the server even when you get disk full
[20:20:01] <joannac> s/shutdown/step down/
[20:20:22] <bmw0679> ok
[20:20:34] <bmw0679> I'll give that a go... thank you for the response
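The sequence joannac describes, sketched as shell commands to run against the stuck primary (the 60-second re-election freeze is an arbitrary illustrative value):

```javascript
// 1. On the full primary: step down so a secondary takes over
rs.stepDown(60)        // this node won't seek re-election for 60 seconds

// 2. Then shut this node down cleanly and grow its disk
db.adminCommand({ shutdown: 1 })
```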
[20:33:53] <kali> joannac: thirty something... we may need to change strategy if we get more nodes because of the number of connections, but for now it's nice and simple
[21:59:49] <acalbaza> i have a document with an array of objects... is there a way to use aggregation framework to project ONLY a particular object in the array? http://pastie.org/8884817
[22:05:52] <joannac> acalbaza: $unwind first
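The pastie link above has expired; the usual $unwind-then-$match shape, with hypothetical collection and field names:

```javascript
// $unwind emits one document per array element, after which a $match can
// keep only the element(s) of interest and $project can strip the rest
db.orders.aggregate([
  { $unwind: "$items" },
  { $match: { "items.sku": "abc123" } },
  { $project: { _id: 0, items: 1 } }
])
```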
[22:14:42] <Number6> joannac: I'm currently $unwind'ing!
[22:19:09] <joannac> Number6: on holiday?
[22:19:11] <joannac> Number6: BED
[22:46:29] <sandstrom> I've got an issue (v 2.2.1) where I have a document with a null key (when looking at the document in console) but where db.mycollection.find({ "my-property": { "$type": 10 } }).count() === 0
[22:46:33] <sandstrom> Does that make any sense?
[22:47:31] <sandstrom> Looking at it there is `{ "_id" : ObjectId("52726607c96b066691008950"), '...' : '...', "my-property" : null }`
[22:48:19] <sandstrom> (type 10 is supposed to match null values; http://docs.mongodb.org/manual/faq/developers/#faq-developers-query-for-nulls)
[22:50:49] <McSorley> Is it possible to infer type with mongoimport i.e for a date datatype?
[23:41:19] <mezgani> hello